TheSpook

Members · 17 posts


  1. I tried all the Preferences.xml editing crap and it didn't work. Adding the claim ID worked for me (-e PLEX_CLAIM="claim-xxxxxxx"), BUT what tripped me up the first time is that it doesn't work if you are behind a reverse proxy (like nginx or LinuxServer SWAG) and use the proxy address. Apparently you have to go directly to the webserver's IP (e.g. http://192.168.1.x:32400/web) and claim it that way. There's a sketch of the full docker run after this list.
  2. Hi @1unraid_user. You could add this line to the script (or to your command when you run the script): find /path/to/backupfiles/ -name '*.tgz' -mtime +5 -exec rm {} \; That will delete all files in the folder that are more than 5 days old. You can change the number to 14 (for 2 weeks) etc. Make sure you choose a folder with ONLY the backups in it and no other files. There's a sketch of this wrapped in a script after this list.
  3. Yup, that's pretty much it. I created an Ubuntu VM and made sure to add the unRAID share /mnt/user (or wherever the data you want to back up lives). Then I jumped into the VM and installed NFS: sudo apt-get update; sudo apt-get upgrade; sudo apt-get install nfs-common. Then I created the folders in Ubuntu where I wanted to mount my shares: sudo mkdir /mnt/shares/share1; sudo mkdir /mnt/shares/share2. Then I added the NFS share(s) to /etc/fstab so they would mount at boot: e.g. if your unRAID server IP is w.x.y.z, add the line w.x.y.z:/mnt/user/share1 /mnt/shares/share1 nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0 to /etc/fstab. If you have other shares you want to add, just add them on separate lines: w.x.y.z:/mnt/user/share2 /mnt/shares/share2 nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0. I avoided mounting the whole /mnt/user because it seemed safer to only add what I needed to back up. I then installed cpanminus (from memory this is needed for the iDrive scripts): sudo cpan App::cpanminus. Then I installed zip and unzip (also needed for the iDrive scripts): sudo apt-get install zip unzip. Then I copied the iDrive Linux scripts into /opt/idrive, and lastly I ran perl /opt/idrive/login.pl and perl /opt/idrive/account_setting.pl. That should be it. Just follow the prompts in the scripts to configure your iDrive backup location (just use the defaults). After that, you can log into iDrive and use the GUI to set up your backup schedule and choose what files you want to back up (these will be the folders you mounted earlier in /mnt/shares/). The whole sequence is condensed into a sketch after this list. Let me know how you go.
  4. If you're interested in that method and want me to take you through the steps I took, let me know.
  5. Hi boohbah. Yeah, I ran into this issue too. I didn't notice it at first as I basically never restarted that container, but then realised it at a later date. I think you might have to build a Docker image with the cron script (and the iDrive scripts) already included. Unfortunately, that was a little beyond me, so in the end I just used a small Ubuntu VM (1GB RAM and 1 CPU) for iDrive and ran the scripts on that. The iDrive service seems to start successfully and so far it has been working well for me. Pretty much the same process applies: when you create the VM, you just mount the NFS shares and run the iDrive script to log on. Once iDrive is logged onto the account, you can choose the mounted share in the iDrive web console...
  6. So sorry! I come here so rarely and didn't turn on "Notify me of replies". My bad! If you're still interested, here is the method I used (the container command is also broken out in a sketch after this list).
     1. Create a new Docker container based on a version of Linux that has all the dependencies. I used the "python" container because it seemed to have Perl and everything I needed. I mounted a folder for the config file as well as the /mnt/user directory so I can choose what to back up. You have to run the Docker container in interactive mode or it will just shut down because it has nothing to do. I couldn't find a way to do this in the GUI, so just open a terminal (the little symbol in the top right-hand corner of the unRAID dashboard) and type: /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -it --name='idrive_ubuntu-ur1' --net='bridge' -e TZ="Australia/Sydney" -e HOST_OS="Unraid" -v '/mnt/user/':'/data':'ro' -v '/mnt/user/appdata/idrive_ubuntu/':'/config':'rw' 'python' Change the timezone and the name of the container if you want. I mounted the /mnt/user folder as read-only because it should only need to read from that folder for the backups.
     2. Once this is done, exit out of the terminal, then go to your Docker tab in unRAID. You should see the new container you just created.
     3. Copy the iDrive Linux scripts into the folder /mnt/user/appdata/idrive_ubuntu/. If you don't have the scripts, let me know and I can send them to you.
     4. (Optional) Modify the account_setting.pl script and add your iDrive username and password. This will save you having to type them in manually later: a) change the line my $uname = Helpers::getAndValidate(['enter_your', " ", $Configuration::appType, " ", 'username', ': '], "username", 1); to my $uname = '[email protected]'; b) change the line my $upasswd = Helpers::getAndValidate(['enter_your', " ", $Configuration::appType, " ", 'password', ': '], "password", 0); to my $upasswd = 'password'; c) save the modified file as account_setting2.pl.
     5. Bash into the new container (in the Docker tab of unRAID, click on the question mark icon next to the container name and click Console).
     6. Once you are in, type: perl /config/scripts/account_setting2.pl (or, if you didn't modify the script, perl /config/scripts/account_setting.pl).
     7. That's it; just follow the prompts to set up the server in your iDrive account. Once it is set up, you can configure what you want to back up from your iDrive dashboard (https://www.idrive.com/idrive/in/console?path=/remote/devices). You should see your new Docker container there. If you click on Settings you can choose what to back up. Expand the data folder and you will see all your shares in unRAID. Let me know if you have any issues!
  7. Thanks for your help. I ended up using Backup/Restore Appdata to get back the copy from the night before. Not too much lost, but a good lesson to do a manual backup next time before replacing a failed drive.
  8. OK - sounds like I have to bite the bullet and restore the backup. So here is my plan of attack: 1. Tell the system to stop using a cache drive (how do I do that?) 2. Put the new SSD in and build a new cache. 3. Restore the appdata to the new cache. 4. Will my Docker containers magically come back (at the moment I'm seeing no containers because they were all in appdata)? 5. Order a second SSD for redundancy on the cache again (not that it really helped last time!). Thanks again - I really appreciate the assistance.
  9. Thanks for the feedback. Yes, the drives are both showing as failed on the server. However, I was hoping that the problem is one of a locked partition or something like that, rather than a physical failure. The reason I suspect that is the timing. As I say, one failed, but the other was looking fine - until I pulled it out (while the array was stopped); as soon as I put it back in, it failed straight away. When I put the drive in another Linux server, I get the following error: ata1 comreset failed errno=-16. I read elsewhere that this could be a BTRFS or a partition problem. Even if I can't recreate the cache pool with these drives, I'm hoping I can get the data off them. For example, this post: https://askubuntu.com/questions/62295/how-to-fix-a-comreset-failed-error says to run fsck, but I wanted to check here whether that is a good idea or if it might be destructive on the drive. Also, if a drive is part of a cache pool, will it have a full copy of the data on it (like a standard mirror) or are both drives needed to reconstruct the data?
  10. Sure. Attached. Thanks for helping! unraid01-diagnostics-20200805-1749.zip
  11. I should also add that I have tried the steps listed here: The problem is, the drives aren't showing at all. Even if I run fdisk -l, I can't see the two old cache drives.
  12. Hi all. I have a ProLiant DL380p G8 running unRAID 6.8.3. I had 2 x 240GB SSDs set up as a cache pool. The other day, one of my cache drives just went offline. I saw a number of errors in unRAID like:
      Jul 31 16:41:29 UNRAID01 kernel: BTRFS error (device sdh1): bdev /dev/sdh1 errs: wr 4442242, rd 272474, flush 44513, corrupt 0, gen 0
      Jul 31 16:41:28 UNRAID01 kernel: BTRFS warning (device sdh1): lost page write due to IO error on /dev/sdh1
      Jul 31 16:41:27 UNRAID01 kernel: BTRFS error (device sdh1): error writing primary super block to device 1
      But since I still had a healthy drive in the pool and my system was still running, I just ordered a replacement drive. Yesterday the new drive arrived and I tried to replace it. I stopped the array, then removed what I thought was the bad drive. Unfortunately, I actually pulled out the good cache drive instead. "No harm done," I thought, "the array is offline, I'll just put that one back in." But when I slotted it back in, it was immediately also stuffed. The LED indicator on the drive was orange - which according to HP means "The drive has failed" - and unRAID couldn't see the drive at all. The same happened when I tried to put the old failed cache drive in. I've ruled out a controller or drive caddy error - the new drive is recognised by the server and unRAID in both slots. So I tried connecting the two old cache drives to a normal Linux box to see what it says. When I do so, I get the following error: ata1 comreset failed errno=-16. Some Googling tells me that this could simply be due to errors in the partition or maybe the btrfs file system. However, I'm loath to try any repairs at this stage in case I do further damage. My gut tells me this is a data error rather than an actual hardware error, because the coincidence would be too high... I do have an appdata backup from the night before, but I'm running a number of MySQL databases in Docker containers and it would be a real pain to lose a day's data on them. Thanks in advance - I really appreciate any advice anyone can give!
  13. Thanks for pointing me in the right direction. The same fix worked for me. I had CA Backup/Restore Appdata set to terminate apps if they don't shut down in 60 seconds, and I'm guessing that was the cause of the issue: my MariaDB container must not have shut down within 60 seconds, so it was terminated by force and that resulted in the dirty state. I've increased the timeout to 240 seconds to be sure. Hopefully that avoids this issue in the future.
  14. This is an old thread, but I got this working using a Docker container if anyone is interested. Takes some basic Linux commands but nothing too difficult.
  15. Ah OK, makes sense. Can you possibly point me in the direction I should look? Where is the config to start QBT automatically? Or are there any logs? I have tried to run the container twice (with different DNS servers, just in case) and had the same situation both times.
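
Sketch for post 1 (the Plex claim): a minimal, hypothetical docker run showing where -e PLEX_CLAIM goes, assuming the official plexinc/pms-docker image on host networking. The container name, volume paths and the claim-xxxxxxx value are placeholders, not the exact command from the post.

    # Get a fresh token from https://plex.tv/claim first - it expires after a few minutes.
    docker run -d \
      --name='plex' \
      --net='host' \
      -e PLEX_CLAIM="claim-xxxxxxx" \
      -v '/mnt/user/appdata/plex':'/config':'rw' \
      -v '/mnt/user/media':'/media':'ro' \
      'plexinc/pms-docker'

    # Then finish the claim by browsing directly to http://192.168.1.x:32400/web,
    # not through the reverse proxy.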
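
Sketch for post 2 (pruning old backups): the same find line wrapped in a small script, e.g. for the User Scripts plugin. The path and retention value are placeholders; note the quoted '*.tgz' so the shell doesn't expand the glob before find sees it.

    #!/bin/bash
    # Hypothetical cleanup step to append to a backup script.
    BACKUP_DIR='/path/to/backupfiles'   # must contain ONLY backup archives
    RETENTION_DAYS=5                    # e.g. 14 for two weeks

    # Delete any .tgz archive older than the retention window.
    find "$BACKUP_DIR" -name '*.tgz' -mtime +"$RETENTION_DAYS" -exec rm {} \;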
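
Sketch for post 3 (backing up unRAID shares from an Ubuntu VM with iDrive): the commands from that post condensed into one block, run inside the VM. w.x.y.z and the share names are placeholders exactly as in the post, and the iDrive scripts are assumed to already be copied to /opt/idrive.

    # Install the NFS client.
    sudo apt-get update && sudo apt-get upgrade -y
    sudo apt-get install -y nfs-common

    # Create local mount points for the unRAID shares.
    sudo mkdir -p /mnt/shares/share1 /mnt/shares/share2

    # One fstab line per share (w.x.y.z = unRAID server IP), then mount everything.
    echo 'w.x.y.z:/mnt/user/share1 /mnt/shares/share1 nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0' | sudo tee -a /etc/fstab
    echo 'w.x.y.z:/mnt/user/share2 /mnt/shares/share2 nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0' | sudo tee -a /etc/fstab
    sudo mount -a

    # Dependencies for the iDrive scripts, then log in and register the machine.
    sudo cpan App::cpanminus
    sudo apt-get install -y zip unzip
    perl /opt/idrive/login.pl
    perl /opt/idrive/account_setting.pl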
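
Sketch for post 6 (the Docker route): the one-line container command from step 1 broken out for readability, plus the script call from step 6. Everything is as in the post; the only assumption is that the iDrive scripts sit in a scripts/ subfolder of the appdata share, which is what the /config/scripts/ path in step 6 implies.

    # From the unRAID terminal: create the container interactively (step 1).
    /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -it \
      --name='idrive_ubuntu-ur1' \
      --net='bridge' \
      -e TZ="Australia/Sydney" \
      -e HOST_OS="Unraid" \
      -v '/mnt/user/':'/data':'ro' \
      -v '/mnt/user/appdata/idrive_ubuntu/':'/config':'rw' \
      'python'

    # Later, from the container's console (step 6), register it with iDrive:
    perl /config/scripts/account_setting2.pl   # or account_setting.pl if you didn't pre-fill credentials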