Leaderboard

Popular Content

Showing content with the highest reputation on 04/16/19 in Posts

  1. Welcome aboard, no matter where you are from. Most of us are HUGE fans of unRAID and have been for many, many years. Wait until you see the update process of unRAID as new versions drop: simply click a button, reboot, and you're off and running again. No jailbreaking needed here. Beat up that install of yours, and keep in mind, like mentioned above, we are here to help. 😀
    2 points
  2. I thought I'd share my rsync config. I still feel like Unraid, as a storage server, should provide a built-in method to back up or sync its data to another location, but configuring rsync gives you a lot of power, and once set up, it's just great. I based my config and setup on the tutorial found here: https://lime-technology.com/forums/topic/52830-syncronize-servers-using-rsync-over-ssh-next-door-or-across-the-world/ I made some adjustments, especially the option to watch the sync while it runs, and notifications that work with the built-in Unraid notification system, so on errors you will get an e-mail notification if your Unraid is set up for that. The tutorial above covers the SSH setup, connecting via key instead of login, and the basic rsync setup; you can continue from where that guide ends.
     What you need is the NerdPack plugin; from there, install "screen". It is used to run rsync in its own screen session, giving you the option to pull the progress up in a console at any time. The second plugin is optional but recommended: "User Scripts". It allows you to add your own scripts and execute them via cron from the Unraid GUI.
     I created 2 scripts for my rsync. The first one I called "rsync"; it does the rsync job and, depending on its exit code, sends a notification to the Unraid notification service:

        #!/bin/bash
        rsync -avu --bwlimit=4000 --numeric-ids \
          --log-file=/mnt/user/Backup/rsync-logs/log.`date '+%Y_%m_%d__%H_%M_%S'`.log \
          --progress --exclude 'C Sys' --timeout=600 \
          -e "ssh -p 4222 -i /root/.ssh/DSStoDMS-key -T -o Compression=no -x" \
          /mnt/user/Backup/Devices/ [email protected]:/volume2/Backup/Unraid_rsync
        if [ $? -eq 0 ]
        then
          /usr/local/emhttp/webGui/scripts/notify -s "`hostname` to DMS Rsync Backup complete" \
            -d "Sync completed. `date` `tail /mnt/user/Backup/rsync-logs/last.log`"
        else
          /usr/local/emhttp/webGui/scripts/notify -s "`hostname` to DMS Rsync Backup FAILED" -i "alert" \
            -d "Sync failed. `date` `tail /mnt/user/Backup/rsync-logs/last.log`"
        fi

     Of course, the rsync command will need some adjustments for your setup. If your Unraid server is set up to send you an e-mail on errors, it will do that too if rsync fails! The second script calls the first one in a "screen" session:

        #!/bin/bash
        export SCREENDIR=/root/.screen
        screen -dmS rsyncdms /boot/config/plugins/user.scripts/scripts/rsync/script

     As long as the job is running, you can pull it up with "screen -r" from a console; the built-in console from the WebGUI works fine. Attention: if you name the first script anything other than "rsync", you have to adjust the path on the last line. Now just set a schedule for the second script and watch its progress with "screen -r". I hope this helps somebody. I'm not an rsync expert, so if anyone with more experience has some input, it is very welcome.
     And I have a question myself: I have large files to transfer, and sometimes a lot of new data comes in at once. If I have to transfer >400GB, it needs more than 24h, and the rsync backup script (which is scheduled to run daily) is started again. The second run starts with the file that is currently transferring in the first instance, so it works, but it transfers that big file twice. Should I check whether rsync is already running before starting the script? Or does running rsync with --partial do any good or bad here?
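     One common way to stop the daily cron from kicking off a second transfer while the first is still busy is a non-blocking flock(1) guard at the top of the script. This is only a sketch under my own assumptions, not part of the original setup; the lock file path is a hypothetical choice:

```shell
#!/bin/bash
# Sketch: refuse to start a second transfer while one is already running.
# The lock file location is a hypothetical choice; any writable path works.
LOCKFILE=/tmp/rsyncdms.lock

exec 9>"$LOCKFILE"            # keep fd 9 open on the lock file for the script's lifetime
if ! flock -n 9; then         # -n: give up immediately instead of queueing behind the old run
  echo "previous rsync run still active, skipping this one"
  exit 0
fi

# ...the rsync command from the first script would go here...
echo "lock acquired"
```

     flock releases the lock automatically when the script exits, so a crashed run cannot wedge future runs. Adding --partial to the rsync call is still worth considering, so a killed transfer can resume a big file instead of restarting it.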
    1 point
  3. https://www.amazon.com/gp/product/B015CQ8DCS/
     IOMMU Group:
        [10b5:8609] 41:00.0 PCI bridge: PLX Technology, Inc. PEX 8609 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch with DMA (rev ba)
        [10b5:8609] 41:00.1 System peripheral: PLX Technology, Inc. PEX 8609 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch with DMA (rev ba)
        [10b5:8609] 42:01.0 PCI bridge: PLX Technology, Inc. PEX 8609 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch with DMA (rev ba)
        [10b5:8609] 42:05.0 PCI bridge: PLX Technology, Inc. PEX 8609 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch with DMA (rev ba)
        [10b5:8609] 42:07.0 PCI bridge: PLX Technology, Inc. PEX 8609 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch with DMA (rev ba)
        [10b5:8609] 42:09.0 PCI bridge: PLX Technology, Inc. PEX 8609 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch with DMA (rev ba)
        [1b21:1142] 43:00.0 USB controller: ASMedia Technology Inc. ASM1042A USB 3.0 Host Controller
        [1b21:1142] 44:00.0 USB controller: ASMedia Technology Inc. ASM1042A USB 3.0 Host Controller
        [1b21:1142] 45:00.0 USB controller: ASMedia Technology Inc. ASM1042A USB 3.0 Host Controller
        [1b21:1142] 46:00.0 USB controller: ASMedia Technology Inc. ASM1042A USB 3.0 Host Controller
     Using pci-stub.ids=1b21:1142 in syslinux.cfg lets one or more of the 4 controllers bind to a VM (but not to different VMs without the PCIe override being set). I tested thumb drives and had three Brio webcams ingesting at 4K simultaneously without issue. No need to stub 10b5:8609. The card is not compatible with Unraid's "BIND" via vfio-pci.cfg.
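     For reference, pci-stub.ids goes on the kernel append line in the syslinux.cfg on the Unraid flash drive. A sketch of the boot stanza, assuming the stock Unraid entry; your labels and other append options may differ:

```
label Unraid OS
  menu default
  kernel /bzimage
  append pci-stub.ids=1b21:1142 initrd=/bzroot
```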
    1 point
  4. Is this True? I thought all proceeds went to saving the Australian Albino Hump Back Mosquito?
    1 point
  5. FYI, that command won't "solve" anything; it just delays the problem from happening again. The real solution is this: https://forums.unraid.net/topic/57181-real-docker-faq/#comment-564326
    1 point
  6. Welcome to the Horde! Now show your dedication by branding yourself: either get an Unraid tattoo or pick up some Unraid merch. https://www.zazzle.co.uk/store/unraid P.S. Unraid is donating the proceeds to save the ocean.
    1 point
  7. Depending on what HBA you've got, you might get better throughput using the onboard ports. This too :-)
    1 point
  8. Thanks. I'm no longer on a 30-day trial; I purchased a license yesterday, as the system is up and stable with Dockers installed and not giving me any grief. I'm kicking myself, as I should have installed unRAID at the time of the FreeNAS Corral disaster 2 years ago; I do recall looking for other NAS OSes back then. Oh well, hindsight is a wonderful thing.
    1 point
  9. Depending on the SSDs/HBA you have, TRIM might not work, so it's better to leave them on the onboard ports; as for the disks, it won't make any difference.
    1 point
  10. I bought both the 8TB and 10TB about two months ago in another sale, and I can confirm these have the 3.3V reset issue. You will need to follow this guide: How to Fix the 3.3V Pin Issue in White Label Disks Shucked from Western Digital 8TB Easystore Drives
    1 point
  11. I am also new to Unraid and I'm having the exact same issue. I have not purchased a license yet and am running on the trial; purchasing a license will depend on this working.
      Unraid version: 6.7.0-rc7
      AMD Ryzen 1700X
      Radeon RX 570
      With the GPU attached to a Windows 10 VM, it either won't boot at all, gives me the `127` error, or tells me that it's not able to power up and is stuck in D3 mode. jarvis-diagnostics-20190416-0400.zip
    1 point
  12. No, it's my fault. A piece of experimental code I was working on mistakenly wound up in the release version. Check for updates.
    1 point
  13. No - it means that the first 24 bits of the address are fixed and will consist of 192.168.1, but that the last 8 bits can be anything.
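      A quick shell sketch of what that /24 split implies, using the prefix from the example above:

```shell
#!/bin/bash
# A /24 network fixes the first 24 bits (three octets); the last 8 bits vary.
prefix="192.168.1"              # the fixed 24 bits from the example above
host_bits=$((32 - 24))          # 8 bits left over for host addresses

echo "first address in range: ${prefix}.0"
echo "last address in range:  ${prefix}.255"
echo "addresses in range:     $((2 ** host_bits))"    # 2^8 = 256
```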
    1 point
  14. Great to hear! Welcome to the community and don't hesitate to ask questions if they arise. Cheers
    1 point
  15. rclone isn't found because rclone isn't installed inside the container. You would have to either run rclone on the host, or fork my code and include rclone in the build of the Docker image.
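      For anyone considering the second route, a hypothetical sketch of what a fork's Dockerfile addition could look like. The base image name is a placeholder, not the actual image this container is built from; the install command is the generic one from rclone's own site:

```
# Hypothetical fork: add rclone on top of the existing image.
FROM original/image:latest    # placeholder; substitute the fork's real base image

# Install rclone via the official install script (assumes curl and unzip exist in the image).
RUN curl https://rclone.org/install.sh | bash
```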
    1 point
  16. Oh right, sure, you can point beets at any folder you've mapped to it. So in this case you could set:
         copy: no
         move: no
      Then you can run:
         beet import /libraryfolder/Abba-Album2
      to specify the album you want included.
      Ok, sure, I had my own MB server running at one point. I ended up removing it, as I could never be sure whether it was causing problems or not. Once the initial library scan is complete, the scan times aren't that bad. Besides, I find Discogs provides me with better matching anyway. I'd suggest putting the local MB server on the back burner until you have everything else working smoothly.
      You should only run DSNP if a container (or user) has been editing files as a user other than users:nobody. If beets is set up correctly, check the app that is creating the files in the first place (downloader or file browser).
      As mentioned in the documentation, beets has its limitations when it comes to perfectly matching releases. You can raise or lower the threshold before beets requires user input; I have mine set at 82% of a confident match, and anything below that I match manually. Too many or too few tracks always triggers a manual import, where I look up the id on Discogs or MB and input it manually.
      Look up the documentation on the fetchart plugin; you can be very specific about artwork sourcing. I don't use it myself, relying instead on existing artwork in the folder.
      Not wanting to pass you off onto someone else, but I generally find the beets Google Groups to be very helpful for technical beets support. That's where I go for assistance.
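      To see the pieces above in one place, a sketch of the relevant bits of a beets config.yaml. Treat the 0.18 as an assumption: beets measures distance rather than similarity, so I am reading "82% confident match" as a distance threshold of 1.0 - 0.82:

```
# Sketch of a beets config.yaml for the settings discussed above.
import:
  copy: no                    # leave files where they are
  move: no

match:
  strong_rec_thresh: 0.18     # auto-accept matches at >= 82% similarity (assumption)
```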
    1 point
  17. https://superuser.com/a/1085558 Perhaps something along the lines of this?
    1 point
  18. Why did you add it to the plex container?
    1 point