TangoEchoAlpha

Members
  • Posts: 46
  • Joined
  • Last visited
Everything posted by TangoEchoAlpha

  1. Thanks, I found the command-line option for it (DEEP_ARCHIVE) after I made those previous posts. But as you say, for big files you don't need back quickly it's massively cheaper!
  2. Also running a Ryzen APU in Unraid 6.9.2 here, albeit a 2400G on an MSI X470 motherboard. The only issue I have is that I can't get VMs up and running, which I suspect is just down to enabling AMD-V in the BIOS somewhere - not that I've really looked into it at all. Would be interested to know what issues you had, if you want to try Unraid again?
  3. Hi Joch - Sorry, I meant to add to this thread earlier this morning. I added the --verbose flag to the command parameters and now I am successfully uploading to S3 straight into the Glacier storage class. Maybe the lock file got cleaned up in the interim, maybe it's a coincidence, but it's working! Am I right in thinking that this will not support either client-side or server-side encryption, due to the need to do the MD5 hash as part of the file comparison? Thanks 😀
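To sketch what the working call looks like: `aws s3 sync` accepts a `--storage-class` argument, and GLACIER (or DEEP_ARCHIVE for colder data) are valid values. This is only a dry-run sketch - it prints the command rather than executing it, the bucket name and path are made-up examples, and a real run needs AWS credentials configured:

```shell
# Dry-run sketch: prints the aws-cli command instead of executing it.
# Bucket name and local path are illustrative, not from the actual container.
run() { echo "+ $*"; }           # swap `echo` for real execution when ready

SRC="/data"                      # local directory to back up (example)
BUCKET="s3://example-backup"     # target bucket (example)

# Sync straight into the Glacier storage class; --storage-class is a real
# `aws s3 sync` option, and DEEP_ARCHIVE works the same way.
run aws s3 sync "$SRC" "$BUCKET" --storage-class GLACIER
```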
  4. I'm trying out this container, hoping to automate my AWS backups with a lightweight solution! At the moment I use a Windows app called FastGlacier to manually back up files to AWS, but obviously an automated solution would be better. I have installed the container and set my configuration as per the following:

     As I want to use Glacier for backup and its lower cost, my storage-class command parameter is set to GLACIER, which according to https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html is a supported option. After starting the container and checking the logs, I saw this error saying that the bucket didn't exist:

     I don't know if there's a way to create the bucket automatically if it doesn't exist, but in the meantime I consoled into the container and used s3cmd to make a new bucket. I then checked that the bucket exists and also verified that I could manually use s3cmd to upload a file from my container's 'data' path:

     But I am now seeing a similar issue to the one joeschmoe saw - the job fails due to an existing S3 lock file:

     Based upon joeschmoe's findings, I am assuming this is because s3cmd doesn't like my storage-class parameter being set to GLACIER - yet the same option runs fine when I run s3cmd sync manually from the command line, and I can see in the AWS management console that the file uploaded successfully to the bucket. I would really like to get this working, so I'd be grateful for any pointers! I did try restarting the container several times in case that would help clean out /tmp, but it didn't change the behaviour.
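For reference, the manual steps I ran inside the container look roughly like this. It's a dry-run sketch that only prints the commands: the bucket name is a placeholder, a real run needs s3cmd configured with access keys, and I believe `--storage-class` is only accepted by newer (2.x) s3cmd releases:

```shell
# Dry-run sketch of the manual s3cmd steps (prints rather than executes).
# Bucket name is a placeholder, not the real one from my setup.
run() { echo "+ $*"; }

BUCKET="s3://example-backup"

run s3cmd mb "$BUCKET"                                    # make the bucket by hand
run s3cmd ls "$BUCKET"                                    # confirm it exists
run s3cmd sync --storage-class=GLACIER /data/ "$BUCKET/"  # manual sync into Glacier
```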
  5. I may be miles off with this suggestion, but are you sure the 10GB NIC is conflicting with the integrated NIC? Wondering if you might have a router which supports LAN isolation / VLAN in which case 192.168.2.1 may be the default gateway for a second LAN on the router?
  6. Well that certainly seems to be the case here. I started up a live image of Linux Mint on the server and found that the network connection wouldn't start. Moved the ethernet cable to a different port on the switch and it connected fine. So I went back to my Unraid console and found that my Unraid server now appears in Windows Network. Pinging 192.168.1.231 works and invader resolves to that IP address. Equally I can ping my Windows PC from the Unraid console. I've never had a port on a switch go like that, and it's not a cheap switch either (Draytek P2121 - went for something decent to power the home CCTV cameras). Thank you all for the help - I feel like a bit of a plonker!
  7. Right, pulled the spare network card from the system, leaving just eth0. Makes no difference - I've attached another diagnostics file invader-diagnostics-20201011-2212.zip
  8. I agree. The DHCP pool is set up on the router to allocate 192.168.1.231 to my server. I'm sure I could then change the Unraid server to not use a static IP, as having both is a bit belt and braces.
  9. My system does have multiple NICs: one on the motherboard (which the system is connected to the network with, as eth0), and the other two, eth1 and eth2, are on a separate network card. I was going to try to set up bonding, but the documentation for my switch was less than easy to follow. I can try removing that extra network card, but the system has had it in there for a while. It's 10pm here in the UK and I have to be up at 5am for work tomorrow. Will check the thread when I get home from work, so if I don't reply please don't think I'm ignoring anyone kind enough to try and help!
  10. Diagnostics file attached, thanks for the help guys. invader-diagnostics-20201011-2146.zip
  11. I will give it a go. Just plugged the server USB stick into my Windows computer - I can see /config and other directories on there, so I've made a copy of the USB drive contents before doing anything else. I'll reboot and see what happens! EDIT: Nope - I can't ping the server using its local IP address (or vice versa), and I can't ping anything else on the local network from the Unraid text console.
  12. So I had a power cut whilst I was at work today, came back home and restarted the Unraid server, only to find that it's no longer contactable on the network. I am able to boot to the Unraid text console and log in, but despite the console stating that Samba has started and the server supposedly using the correct IP address, I just can't see it on the network. I am at a loss as to what to do - I can log in to the console as root, but after that nothing seems to happen. I can't connect to Unraid using a browser or start it in GUI mode. I'd appreciate any help or assistance anyone can offer, thanks!
  13. Managed to fix it - time to do a quick backup of my config! The issue was my network configuration: for some reason, network.cfg was a bit borked! I edited the file and removed the references to the second NIC - the lines from IFNAME[1]="eth1" down to and including GATEWAY[1]="192.168.1.1" - then changed SYSNICS to 1 and rebooted. Phew, an absolute relief!
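For anyone hitting the same thing, the shape of the edit looked roughly like this (the gateway address is from my own network, and the exact second-NIC lines in your network.cfg will differ):

```
# /boot/config/network.cfg - sketch of the fix, values illustrative.
# Removed the whole second-NIC block, i.e. every line from
#   IFNAME[1]="eth1"
# down to and including
#   GATEWAY[1]="192.168.1.1"
# then dropped the NIC count to match what's left:
SYSNICS="1"
```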
  14. I would really appreciate some help with this one - attached is my diagnostics dump. I have recently lost internet access on my Unraid server, but local networking works fine. I also went against the advice of "don't change anything" and changed quite a bit in an attempt to fix it! Currently running Unraid 6.7.2; however, I did an upgrade to the recent RC, and it was then that I lost internet connectivity.

      Here's a list of what I changed in an attempt to fix it:
      1) Rolled back from the RC to 6.7.2 stable
      2) Pulled the extra NIC that I have in my system (would have been eth1 and eth2 - eth0 is the motherboard NIC)
      3) Removed the OpenVPN client, as I assumed it might not have liked the upgrade to the RC - it kept failing to start up tun5.

      Some extra info that might help:
      192.168.1.231 is my Unraid server (invader)
      192.168.1.230 is PiHole running on a Raspberry Pi 4, acting as DNS1
      192.168.1.233 is PiHole running in a Docker on my Unraid server, acting as DNS2
      192.168.1.1 is my router/gateway, a Ubiquiti router. DHCP is set up on this, with the above IP addresses reserved via static mappings. The DNS servers are also set to the two PiHoles.

      Strangely, Unraid seems just as unhappy if I change the DNS servers in network settings to something like 8.8.8.8 and 8.8.4.4. Obviously it's not a network issue per se - I have internet access from other devices on my network, and I can access SMB shares on the Unraid server, access the GUI and download the diagnostics report. I'd be really grateful for any suggestions before I pull out what little hair I have left. invader-diagnostics-20191123-2126.zip
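In case it helps anyone else debug the same symptom, here's the checklist I'd run from the Unraid console to separate "no route out" from "DNS broken". It's a dry-run sketch that just prints each command; the gateway IP is the one from my network above:

```shell
# Dry-run checklist: prints each diagnostic command instead of running it,
# since the real commands only make sense on the affected box.
run() { echo "+ $*"; }

run ip route show default      # is 192.168.1.1 actually the default gateway?
run cat /etc/resolv.conf       # which DNS servers is Unraid really using?
run ping -c1 8.8.8.8           # raw internet reachability, no DNS involved
run ping -c1 google.com        # adds name resolution on top of connectivity
```

If the raw-IP ping works but the hostname ping fails, the problem is DNS rather than routing.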
  15. Just to follow this up in case anyone finds the thread. Having been running Unraid for a little while now, I find myself using it for more and more. I've got quite a lot of Dockers running, with 6 drives of varying capacity, 1 parity drive, 1 M.2 cache drive and obviously my USB drive with Unraid on it. Generally it runs OK, and I think for people who have light demands of Unraid, an APU will be fine. However, when my server decides it's time to do a weekly parity check, plus the mover runs to move files from the cache drive to the main array, the whole thing grinds to a very slow pace. You can see the APU threads being given a right thrashing. I also run Shinobi in a Docker container and need the system to be capable of continuing to run when a parity check/cache mover kicks off. I'm going to be retiring the APU when I upgrade my current main rig, which has a 2700X in it. Hope this gives food for thought to anyone who finds this thread in a search.
  16. Just to say was looking for the answer to this as well - thanks @Squid
  17. I haven't seen any evidence of it not! I've had it running for some time now, and it seems completely stable. I've got five drives (one an external USB drive) plus a parity drive, and it works well for what I use it for. I'm also making use of several Dockers, such as Krusader, Shinobi, Pi-Hole, Youtube Downloader and Plex. I expect streaming from Plex to many clients would be an issue, but for my wife and me it's fine.
  18. I went with a 2400G in the end, on an MSI X470 Gaming Carbon motherboard. I did consider a B450 motherboard, but decided to go X470 and get the extra SATA ports.
  19. Hi all - New user here, looking to build my first ever NAS, and I've decided that Unraid is the best option versus the others. At the moment I am trying to decide whether to build my Unraid box around a Ryzen 3400G APU or a Pentium G5600. I plan on using Unraid in a home environment where it's going to be used for file storage for the family and for storing footage directly from IP CCTV cameras. I expect it will also be used to serve movies etc. on Plex and to run Dockers - but I am not planning on running any VMs or gaming inside VMs, so passthrough is not an issue. Thanks 🙂