osker246

Everything posted by osker246

  1. That actually makes a lot more sense now. Yeah, I don't mind having it down for the couple of days that I'm away.
  2. I will be flying out to the other server's location soon and will have access to both boot drives while making the swap. If I understand what you're saying, I would have to purchase two additional USB drives and then restore from backups, which still makes things a little difficult because both servers have different configurations, plugins, shares, etc. I'm assuming the UUID information for the USB drive is stored in the .key file, no? If that's the case, isn't it easier to just swap everything in the config folder between the two drives, except for the .key files (see the sketch just below)? That way each server retains its configuration.
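     Something like this is what I'm picturing, assuming both flash drives are mounted on a third machine at /mnt/usbA (Basic) and /mnt/usbB (Plus). The mount points are just examples and this is untested, so keep backups of both drives handy:

        # Keep safety copies of both config folders
        cp -r /mnt/usbA/config /tmp/configA
        cp -r /mnt/usbB/config /tmp/configB

        # Swap the config folders between the drives
        rm -rf /mnt/usbA/config /mnt/usbB/config
        cp -r /tmp/configB /mnt/usbA/config
        cp -r /tmp/configA /mnt/usbB/config

        # Put each drive's ORIGINAL .key back, since the license is tied to the USB GUID
        rm -f /mnt/usbA/config/*.key /mnt/usbB/config/*.key
        cp /tmp/configA/*.key /mnt/usbA/config/
        cp /tmp/configB/*.key /mnt/usbB/config/

     Then the drives swap machines: the drive carrying the Plus key (and now server A's config) boots server A, and vice versa.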
  3. Great idea on the backup; I didn't even consider that. I am flying out soon to where the other server is located, and I was going to bring the USB drive with me and perform the swap.
  4. So I have two servers: server A has a Basic license and server B has a Plus license. I'd like to move the Plus license to server A and the Basic license to server B. What exactly do I have to do to make this change as smooth as possible without ruining anything? Do I just copy the config folder from each USB drive over to the other drive, leaving behind the Basic/Plus .key file on each one? I'd like a clear understanding of how to get this done, as the servers are located in different states and I will not have time to troubleshoot any problems if they arise. Any input is appreciated, thanks!
  5. Here are my volume mappings and download shares. I'm not quite sure where or what the docker run command is. Care to elaborate on where I can find this information? I should also say that I did a reinstall of the unRAID OS, wiped the cache drive, and did a fresh install of all containers and plugins, and the issue still persists.
  6. I posted this on /r/unraid, but I figured I might try to get some help here too. I need some help figuring out how to stop NZBGet from making my server completely unresponsive while it is working. I can't load unRAID's GUI, nor can I access any Docker containers, while it is downloading. Shortly after a download starts, CPU usage spikes to 100% and I am locked out until NZBGet completes its tasks. I've tried restricting NZBGet to 4GB of memory and 1 of the 4 cores via --cpuset-cpus=0 --memory=4G (see the sketch below). According to "docker stats" the memory is restricted correctly, but it still utilizes all 4 of the available cores. Does anybody have an idea how to stop this? Also, NZBGet downloads and unpacks to the cache SSD, not the array.
     M/B: ASRock H97M-ITX/ac
     CPU: Intel® Core™ i5-4460 @ 3.20GHz
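     For reference, this is roughly how I'm applying the limits. The container and image names are just examples; on unRAID these flags would go in the template's "Extra Parameters" field:

        # Pin the container to core 0 and cap memory at 4GB
        docker run -d --name nzbget \
          --cpuset-cpus=0 \
          --memory=4G \
          -v /mnt/cache/appdata/nzbget:/config \
          -v /mnt/user/downloads:/downloads \
          linuxserver/nzbget

        # The same limits can also be applied to an already-running container
        docker update --cpuset-cpus=0 --memory=4G nzbget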
  7. Yeah, that's what I have been doing. I was just under the impression that I could manually execute the cron job in the command prompt for testing purposes, so when I entered the path to the crontab in the command prompt, it gave the error. I just realized it was working properly when one of my crontabs executed perfectly fine. Thanks for the help though!
  8. Well, I'm a complete idiot and everything works as it should. I was under the impression that I could manually run a crontab by entering its path in the command prompt. Turns out this is not the case. Sorry to waste your time.
  9. Thanks for the response! I'm assuming that by "full path" I should do:

        0 12 * * 7 /usr/bin/rsync -a --ignore-existing --ignore-errors --delete --log-file=/mnt/cache/appdata/rsync.log /mnt/user/audiobooks/ /mnt/disks/192.168.1.150_audiobooks |& logger

     Correct? If so, the same "command not found" error still occurs.
  10. It happens when I run it in the background. It's just strange that it only occurs when using the plugin.
  11. I have been having an issue where rsync, run via User Scripts to back up to a remote server, makes the webGUI unresponsive until the task completes. The strange thing is that when I run the same rsync task via the shell, the webGUI stays usable. Any idea why this is occurring?
  12. So for the past week I have spent hours trying to troubleshoot this problem. I initially used the User Scripts plugin to create a cron job that backs up some shares to a remote server. I eventually noticed that once the script was run via User Scripts, my local server's webGUI would become completely unresponsive. At first I thought something had crashed, and I would end up performing an unclean shutdown. Since I am backing up large files over the internet, I would sometimes be unable to access the webGUI for a whole day. When I had the servers on the same LAN I did not have this issue at all, since file transfers completed much quicker. Seeking an alternative way to run the cron job, I created a crontab in /boot/config/plugins/dynamix containing:

        0 12 * * 7 rsync -a --ignore-existing --ignore-errors --delete --log-file=/mnt/cache/appdata/rsync.log /mnt/user/audiobooks/ /mnt/disks/192.168.1.150_audiobooks |& logger

     However, the job does not run, and the syslog shows:

        Aug 29 22:26:40 UNRAID root: ./audiobooks.cron: line 1: 0: command not found

     The strange thing is that before using User Scripts I ran my rsync jobs this way and it worked fine. I can even run this rsync job manually via the shell and it works perfectly. What is happening here? Is my syntax incorrect? I don't have anything else in the crontab, so I am unsure why this is an issue now. Any help is appreciated! unraid-diagnostics-20170829-2227.zip
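     Edit: as it turned out (see #7 and #8 above), a crontab file is a schedule for crond to parse, not a shell script, so executing it directly makes the shell treat the leading schedule field as a command:

        # Running the crontab file as a script fails; the shell tries to execute "0"
        bash /boot/config/plugins/dynamix/audiobooks.cron
        #   ./audiobooks.cron: line 1: 0: command not found

        # To test the job itself, run just the command portion directly
        /usr/bin/rsync -a --ignore-existing --ignore-errors --delete \
          --log-file=/mnt/cache/appdata/rsync.log \
          /mnt/user/audiobooks/ /mnt/disks/192.168.1.150_audiobooks |& logger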
  13. Okay, so I think something else may be the cause of this, and I cannot figure it out. I'm not running any rsync jobs and the server's webGUI is still completely unresponsive. What is going on?
  14. I recently set up 2 unRAID servers at remote locations for backup purposes, connected via an OpenVPN server/client at the router level. Shortly after server1 initiates rsync, server1's webGUI stops responding. I can still shell into the server and access the Docker containers' GUIs, just not the server's own GUI. I've tried performing the file transfer from server1 to server2 via MC and the same thing happens. Why is this happening? Is there a more efficient way of conducting the sync/copy process (something like the sketch below, maybe)?
     Server1 specs:
     M/B: ASRock H97M-ITX/ac
     CPU: i5-4460
     RAM: 16GB
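     One thing I've been meaning to try is throttling the transfer so it can't starve the rest of the system. A rough sketch, untested, with the bandwidth cap and paths as examples only:

        # Run the sync at low CPU/IO priority and cap bandwidth (~5 MB/s)
        nice -n 19 ionice -c2 -n7 \
          rsync -a --ignore-existing --ignore-errors --delete \
          --bwlimit=5000 \
          /mnt/user/audiobooks/ /mnt/disks/192.168.1.150_audiobooks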
  15. Same experience. I got fed up, wiped the container, and started from scratch, and that fixed it for a few weeks. The issue popped up again just the other day, so I had to start from scratch again.
  16. Thanks for the response. So I actually ended up restarting unRAID in hopes of fixing the issue. Sadly, it did not work. After tinkering around for a bit, I learned that Plex no longer liked the user share I was using, for some reason. I transferred my media to a different user share and Plex played it back fine.
  17. I totally understand how to manage my users and servers via Plex.tv. I'm just baffled that a new Plex Docker setup can present me with my Home users on first boot without me logging into my Plex account. This even happened after I reset unRAID and reassigned my drives. Anyway, I didn't mean to take over your thread, but I think my issue stemmed from my media being in a corrupted user share. Creating another user share and moving my TV shows into it fixed the issue.
  18. I would like to know this too. I am having issues with Plex not being able to play back files. I tried reformatting my cache drive to remove the Docker image and appdata folder, and it still does not work properly. There has to be some kind of residual configuration folder outside of the appdata folder that unRAID is holding on to. Even after adding the Plex Docker again, I am greeted by users in my Plex Home on Plex's configuration page. How does this happen on a new installation of Plex?
  19. So I've recently run into an issue with Plex being unable to play media files after purchasing Plex Pass and tinkering with some transcode directories. I tried troubleshooting for the past day, decided to say screw it, and deleted my docker.img and the appdata folder where my Docker configs lived. I deleted these by reformatting my cache drive, where the two resided. However, after setting up a new Docker image and installing Plex in the appdata folder, it seems as if there are residual configuration folders outside of the cache drive. I say this because, even after reformatting, I open Plex for the first time and am presented with the users assigned to my Plex Home, despite it being a new Plex installation. To top it off, I am still unable to play my media files, and I know these files play properly outside of Plex. How is this even possible? I've removed my templates from the Docker menu and formatted the cache. Other than starting fresh on unRAID, I don't know how to fix this. I have 4x4TB drives: 1 parity and 3 data disks. Can anybody here help?
  20. "Split level only applies to new files. Existing files have to be moved manually if you want them on another drive. Happens easily if you make it too restrictive." Okay, so what exactly was done that made it too restrictive? Was it because I chose split level 2?
  21. "To be quite honest, split level confuses the hell out of me. When I originally set up the media shares, I spent quite a while working out what level to use, made diagrams, etc., and got it right. After that, I haven't touched the folder organization, as I just don't want to think about it again. (And every other share I have set to split any and all folders, because I don't care where the files are actually stored.) But I would guess to drop your level down by one and try it again." It still confuses the hell out of me too. The only reason I went with split level 2 was a discussion I had with someone on reddit. Now I am wondering whether I have been using the correct path when adding my data back to the array (see the tree below for how I currently picture the levels). So if I were to switch to split level 3, will it retroactively redistribute the data, or does it apply only to new data? Also, is it common for split levels to fill one drive before utilizing the others?
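     For my own sanity, here is how I currently picture the level counting for my share; someone please correct me if this is wrong:

        media/                  <- share root
          shows/                <- level 1
            ShowName/           <- level 2 (with split level 2, splitting stops here)
              Season 1/         <- level 3 (must stay on one disk with its show)
                episode.mkv

     If that reading is right, then with split level 2 everything inside a given ShowName folder is pinned to whichever disk that show started on, regardless of the allocation method, which would explain one drive filling up while the others sit mostly empty.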
  22. I did learn, while trying to figure out this issue, that split level takes precedence. When I initially set up the server with split level 2, my data had no issue being allocated among the drives. Anyway, when adding my data back via FTP, the path I used was /mnt/user/media(share)/shows/. Was this an incorrect path?
  23. So, long story short: the other day I logged on to unRAID to find one of my drives was unmountable. Me being stupid, I messed around with new configuration settings and screwed up the array. I ended up reformatting my data drives and kept my cache drive intact; the cache drive stores my Docker app configurations. I initiated the transfer of my data back to unRAID, and this morning I find that high-water allocation is not distributing data across the drives. It added a bit of data to disk 2, but not much. What exactly happened? Was I supposed to reset some kind of setting before adding data back to the array? I am contemplating just deleting all my data/cache drives and starting over fresh. Is there an official method of doing so?
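     For anyone comparing notes: as the replies above (#20-#22) worked out, split level takes precedence over the allocation method, so a restrictive split level can pin new files to one disk no matter what high-water wants. On top of that, high-water itself fills in stages, as far as I understand it: with 4TB data disks the initial water mark is half the largest disk, i.e. 2TB, so disk 1 takes writes until its free space drops below 2TB before disk 2 sees much of anything, and the mark only halves to 1TB once every disk is below it. Either effect would look like one drive filling up while the others stay nearly empty.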