Everything posted by Hoopster

  1. My bad, I read c:\ProgramData\Crashplan and my brain saw c:\ProgramFiles\Crashplan. I have Crashplan installed in a different path, so I put the .ui_info file in the {path to Crashplan} directory. I have now copied it to c:\ProgramData\Crashplan. There was another .ui_info file there, which I renamed. However, I still cannot get the client connected to the backup engine with the other ui.properties settings. The only way I can get it to connect is to comment out the "serviceHost=[IP address of unRAID server]" line in ui.properties and set the service port to 4243 (not 4200). This is what this file looked like prior to the upgrade to 4.3, and all was working well then. However, when the client connects it does not recognize any of the backup sets defined on my unRAID server and acts like I have no Crashplan subscription. It is as if it is starting over with no Crashplan license. If I enter the license key in the Windows client, it warns me the license is in use by my unRAID instance and that all backup data will be erased if I transfer the license to the Windows instance.
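     For reference, the two ui.properties variants being compared in that post would look roughly like this (a sketch only; the IP address is a placeholder, and the 4243-vs-4200 ports are taken from the posts in this thread):

       # variant that worked (tunneled/local connection):
       #serviceHost=192.168.1.10
       servicePort=4243

       # variant suggested in this thread (direct connection to the engine):
       serviceHost=192.168.1.10
       servicePort=4200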
  2. My Windows Crashplan client (version 4.3) will not connect to the Crashplan engine through a PuTTY SSH session. Like everyone else, it was working fine until this 4.3 update. I have done the following per suggestions in this thread:
     - Manually updated the Windows client to the 4.3 64-bit version (the prior 4.2 version was also not connecting)
     - Copied .ui_info from the Crashplan docker ID directory to the Crashplan directory on the Windows desktop
     - Changed Crashplan\conf\ui.properties as follows (yes, I uncommented the lines): serviceHost=192.168.1.10 (my unRAID IP) and servicePort=4200
     - Edited my.service.xml in the Crashplan docker to set the <serviceHost> field to 0.0.0.0 (it was 127.0.0.1)
     I did these one at a time and tried to connect after each step. Connection to the engine still fails. Any ideas? What is the best way to verify that the Crashplan docker engine is also at version 4.3? I did run a check for updates a couple of times from the Docker tab and it reports up-to-date on all dockers, including Crashplan. EDIT: Found in app.log in the Crashplan log directory that the engine version is 4.3. So, I now have a 4.3 client that cannot connect to the 4.3 backup engine.
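     The usual way to reach a headless CrashPlan engine from a Windows client is an SSH tunnel that forwards the client's local service port to the engine's port on the server. A minimal sketch, assuming the engine listens on 4243 and the client is pointed at 4200 on localhost (IP and login are placeholders; in PuTTY the same thing is a local-forward entry under Connection > SSH > Tunnels):

       ssh -L 4200:localhost:4243 root@192.168.1.10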
  3. I am running unRAID v6.01 and the latest version of unMENU works with it just fine. I rarely use anything in unMENU anymore since the new Web GUI is so complete; however, like many, I have it installed for emergency situations, which, fortunately for me, have not yet occurred in unRAID 6. I used to see frequent web GUI hangs in unRAID 5, but since moving over to v6 beginning with beta10, I have experienced a GUI hang just once. I have no idea what is causing yours to hang, but there are several mentions in these forums of the Unassigned Devices plugin being a suspect. Are you running that?
  4. You can see it in the variables on the Advanced View page (upper right corner) when editing the docker config. It's something like webUI_changme (I can't remember exactly, as I changed it).
  5. Sorry, I misunderstood your prior post and thought /config was the mistake and that the mapping should have been as in the previous build. I have now changed the container volume mapping to /config and the WebUI is again accessible. Thank you!! Now on to figuring out the server configuration.
  6. Deleted the template at /boot/config/plugins/dockerMan/template-user/my-OpenVPN-AS.xml; removed the container (not the image); added the container again; the problem persists and the openvpn folder in appdata remains empty. As Macester said, a permission issue in creating the folder? Funny thing is I had no such issues with his June 13 build of the container. Issues began only with yesterday's build.
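     For anyone retracing those steps from the console, they amount to roughly this (a sketch; the template path is the one from the post, and the container name assumes the default from the docker run command quoted later in this thread):

       # remove the saved dockerMan template and the container, keeping the image
       rm /boot/config/plugins/dockerMan/template-user/my-OpenVPN-AS.xml
       docker rm OpenVPN-AS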
  7. Quote (Macester): "Remove the container and image, and then clear the config directory (or choose another location). After that it should work, and with the rewrite of the docker it should survive upgrades/rebuilds in the future. PS. Do an update in Community Applications; it was rebuilt and updated about an hour ago."
     My reply: Removed container and image, removed the entire openvpn folder and subfolders in the docker appdata share, reinstalled OpenVPN from Community Applications, set environment variables, rechecked port forwarding in the router. Still cannot access the WebUI.
     Quote (Macester): "Here are some things you can check. Can you reach the webui locally? e.g., https://192.168.1.100:443 or https://192.168.1.100:943? What does the log say if you click the little notepad icon to the right in the docker tab? Does the appdata/config/ folder populate with files/folders? There should be two folders in there, /config and /logs. BTW, I think I managed to reproduce the error. Are you using something like "/mnt/user/appdata/openvpn/"? Try the disk directly instead, "/mnt/cache/appdata/openvpn/"; does that work?"
     My answers: The host path is set to "/mnt/cache/appdata/openvpn/" and has been on every install of the container. However, the openvpn folder contains NOTHING; it appears completely empty. The log file contains the repeated message: "./run : line 3: /usr/local/openvpn_as/scripts/openvpnas: No such file or directory". Cannot reach the webui locally at https://[ipaddress]:443, https://[ipaddress]:943, or https://[servername]:943.
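     A couple of those checks, expressed as commands (a sketch; the container name and host path are the ones quoted above and may differ on your system):

       # is the appdata directory actually populating?
       ls -la /mnt/cache/appdata/openvpn/
       # what is the container logging? (same output as the notepad icon)
       docker logs OpenVPN-AS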
  8. Quote (Macester): "Remove the container and image, and then clear the config directory (or choose another location). After that it should work, and with the rewrite of the docker it should survive upgrades/rebuilds in the future. PS. Do an update in Community Applications; it was rebuilt and updated about an hour ago."
     Removed container and image, removed the entire openvpn folder and subfolders in the docker appdata share, reinstalled OpenVPN from Community Applications, set environment variables, rechecked port forwarding in the router. Still cannot access the WebUI.
  9. Same error here. The OpenVPN container was working yesterday on the June 13 build; at least it was working as far as being able to access the WebUI and poke around in the setup. I had to remove the OpenVPN container today and reinstall it because of something I messed up, and I wanted to reset the whole thing from scratch. Ever since the reinstall, the WebUI is also inaccessible for me. Stopping and restarting the container does not resolve the problem. I am not seeing the other errors being reported here, but I can't access the WebUI; I get a "cannot establish a connection to the server at [ipaddress]:943" error. Nothing else is using ports 943 or 443.
  10. Thanks, Trurl. This is the first container I have added that required variable editing, so I was not familiar with that process.
  11. When I added the OpenVPN-AS container from the Community Applications plugin, I see the following docker run command was executed, which sets the environment variables for usernames and passwords as the container is built:
      /usr/bin/docker run -d --name="OpenVPN-AS" --net="host" --privileged="true" \
        -e ADMIN_PASS="changeme_webui_pass" \
        -e VPN_USER1="changeme_vpnuser" -e VPN_PASS1="changeme_user_pass" \
        -e VPN_USER2="changeme_vpnuser" -e VPN_PASS2="changeme_user_pass" \
        -e TZ="America/Mexico_City" \
        -v "/mnt/cache/appdata/openvpn":"/usr/local/openvpn_as":rw \
        mace/openvpn-as
      How do I change the environment variables to the usernames and passwords I want, as per the container description? Quote (container notes): "Environment Variables: Define the variables ADMIN_PASS (WebUI password), VPN_USER1 and VPN_USER2 (VPN user accounts), VPN_PASS1 and VPN_PASS2 (passwords for VPN user accounts)." Running the above command again, substituting the desired variable values, will simply result in the creation of another container.
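      As #4 and #10 above note, the thread's answer was to edit the variables in the docker template's Advanced View. From the command line, the equivalent would be to remove the existing container and re-run the same command with your own values substituted (a sketch; all values shown are placeholders):

        docker rm OpenVPN-AS
        /usr/bin/docker run -d --name="OpenVPN-AS" --net="host" --privileged="true" \
          -e ADMIN_PASS="my_webui_pass" \
          -e VPN_USER1="alice" -e VPN_PASS1="alice_pass" \
          -e VPN_USER2="bob" -e VPN_PASS2="bob_pass" \
          -e TZ="America/Mexico_City" \
          -v "/mnt/cache/appdata/openvpn":"/usr/local/openvpn_as":rw \
          mace/openvpn-as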
  12. I have a Kingston MobileLite G2, MobileLite G3 and MobileLite G4. Of the three, only the G2 works and is my currently registered flash device with a unique GUID. The G3 shows up in the BIOS as a generic USB 3.0 device (the G2 shows up as a Kingston Reader with both slots available to boot, though it only boots from the SD slot) and will not boot at all with my motherboard, so I do not know if it has a unique GUID. The G4 also shows up as a generic USB 3.0 device and boots fine from the microSD slot, but it does not have a unique GUID and therefore cannot be registered. It is a pity that a good compact flash reader cannot be found (other than the unavailable G2), as it is very, very useful when playing around with configurations and troubleshooting to just pop a different SD/microSD card into the G2 reader without having to do all the copying involved in changing configs on a "fixed" flash drive. I can't imagine Tom will quit supporting the G2. I suppose his statement not to use a card reader has to do with the extreme difficulty of finding one that meets the boot/registration requirements. EDIT: Re: the Kingston MobileLite G3, I am an idiot (a fact established long ago). Running make_bootable actually does what it says it does: it makes the flash drive bootable! The Kingston MobileLite G3 purchased just a few days ago in Mexico does in fact boot unRAID from the SD card slot, and it has a unique GUID which is now registered.
  13. I recently went through the reiserfs ---> xfs transition on my array as well. I got the same results you did, just a list of the contents of the first disk in verify.txt. To "verify" the contents, I ended up doing a directory compare on each disk from Windows to verify that the number of files and size counts were exactly equal. I also spot checked several files in each directory by comparing them, opening them (pictures, videos, movies and TV shows). The array has been in use for several days now since the transition and I have not yet encountered a problem file.
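      If you would rather do that equivalence check from the unRAID console instead of Windows, a rough sketch (assuming the standard /mnt/diskN mount points; compare the counts and totals between the source disk and its copy):

        find /mnt/disk1 -type f | wc -l    # file count on the source disk
        find /mnt/disk5 -type f | wc -l    # file count on the copy
        du -s /mnt/disk1 /mnt/disk5        # total size of each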
  14. OK, thanks. I have several shares restricted to certain disks and one that uses all current data disks. For example, I have a Pictures share that uses all the current data disks (disk1 - disk4). After the changes, would I need to set that to disk1, disk2, disk3, disk5? I have a Videos share that uses disk2, disk3, disk4, so that would require a change to disk1, disk2, disk3, correct? And so on with all shares where the contents have moved to different physical disks? EDIT: Perhaps the easiest thing to do would be to change all shares to use all disks (at least the larger shares that already span multiple disks).
  15. OK, I have read through the instructions several times and it all makes perfect sense. I have five data disks in my array and a parity disk. Disk 5 was recently added (formatted as ReiserFS) and is empty. All five data disks are 3TB WD Reds. I have formatted Disk 5 with XFS (it was empty, as I had not added it to a user share yet) and am copying the contents of Disk 1 to Disk 5 as per the instructions. My planned migration is like this, since all disks are identical in size:
      Disk 1 ---> Disk 5, then format Disk 1 XFS
      Disk 2 ---> Disk 1, then format Disk 2 XFS
      Disk 3 ---> Disk 2, then format Disk 3 XFS
      Disk 4 ---> Disk 3, then format Disk 4 XFS
      Disk 4 then becomes the extra disk I can add to user shares as needed. My question is: what does this do to user shares where I have the current disk1 ... disk4 assigned to the shares? Do I need to unassign the disks and reassign the "new Disk 1" (old Disk 5) as Disk 1, etc. after the changes? As you can see, I am not 100% certain about how physical disks, array disk assignments and the user shares that use these disks are all related after making these changes. Of course, initial array configuration and share creation all makes perfect sense, but I am unclear on what happens when contents are moved from one disk to another after changing the file system.
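      The disk-to-disk copy step in this kind of migration is typically done with rsync from the console; a minimal sketch, assuming the standard mount points (run once per step in the plan above, adjusting the disk numbers each time):

        rsync -av --progress /mnt/disk1/ /mnt/disk5/   # copy Disk 1's contents onto Disk 5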
  16. OK, I think I have this problem figured out and resolved. It is related to this thread: http://lime-technology.com/forum/index.php?topic=39237.0 Somehow, in the creating and configuring of the Crashplan container, I ended up with a user share called ":" that contained an empty /mnt/user directory structure. That, and the creation of appdata/Crashplan on disk1, kept disk1 and parity spinning as long as Crashplan was running. Just removing the Crashplan docker and docker.img was not enough. Once I cleaned up the mystery ":" share and recreated docker.img and the Crashplan container, things appear to be working normally. Crashplan is currently synchronizing block information, but Parity and Disk1 are not spinning.
  17. Unless I stop Crashplan, Parity and Disk1 are constantly spinning. In checking my docker config and Crashplan config, I realized the appdata share had not been set to cache-only and that, in fact, Crashplan program files were installed on Disk1/appdata with some files on Cache/appdata. I removed the Crashplan container and files, ran rm -r appdata on Disk1 to get rid of it completely there, verified the appdata share was set to cache-only and reinstalled the Crashplan container. With Crashplan running, Parity and Disk1 continue to spin. They will not spin down and stay spun down unless I stop Crashplan. CrashPlan is set up per the default config, with /config set to /mnt/user/appdata/crashplan and /data set to /mnt/user. What have I done wrong? EDIT: Since Crashplan was my only installed docker so far, I deleted it and the docker.img file. At least with Crashplan and docker completely gone, I can manually spin down disks and they stay spun down (Parity and Disk1 would spin back up with Crashplan installed). I have no idea why Parity is spinning up in the first place since I am not writing to the array. All disks eventually spun down per disk settings. I'll check all settings before reinstalling the Crashplan docker. I am first trying to eliminate everything else as the cause.
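      A quick way to see whether appdata has strayed onto array disks (a sketch; paths assume the standard unRAID mount points):

        # list every copy of appdata across the array and cache
        ls -d /mnt/disk*/appdata /mnt/cache/appdata 2>/dev/null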
  18. I set the container volume to /mnt and selected backup locations based on share names rather than disks (the disks didn't show anyway). This caused Crashplan to warn me that I was removing files from the backup archive and that they would be permanently deleted. I went ahead and confirmed, thinking I was about to delete my entire 2.8TB worth of backup and start over. However, Crashplan recognized the files as already backed up without a change in folder structure and simply rescanned the existing files (it still took many hours) and backed up a few new files that had been added. After about 18 hours it was done.
  19. (My original question is reproduced in full as #20 below.) Quote (from the reply): "I don't use this docker, but it sounds like you and the people you are quoting are confused about volume mappings. Maybe post what you have for volume mappings and we can help figure it out. It also sounds like you must have had Crashplan set up before to use disk shares instead of user shares. You can't get to the disk shares from /mnt/user. Maybe try /mnt instead."
      Yes, I am sure I am confused; happens quite often! Here is the Crashplan docker configuration with /data set to /mnt. It shows the physical disks in the array and looks like it should work. Below is how Crashplan sees things. Note that I had to drill down under physical disk names to select backup folders. Perhaps this was simply an error on my part in setting the original backup set folders, and I could have/should have done it under user shares (I don't recall if that was something I could have done a couple of years ago when I defined these). Under mnt it only shows disk1 and disk2, and they are not expandable. I see the disk and share names under the /data node, but selecting these causes Crashplan to discard the prior backup files and start over. Under User I see the unRAID share names, and this is probably the preferred way of selecting files to back up as it is not physical-disk dependent, but this also results in Crashplan wanting to discard the prior backup. As seen in the Crashplan GUI, the Pictures share is currently storing files on disk1 and disk2 (although in the unRAID config it can span disk1..disk4); Videos is on disk2 and disk3 (also set to disk2...disk4); and Movies and TV are both on disk4. Disk 5 was recently added and currently has nothing on it.
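      On the mapping itself: the "/mnt/user/:/mnt/user/:rw" value Generalz described is docker's host:container:mode volume syntax, which fails when pasted into the template's single container-path field. Expressed as a docker run flag, the suggestion from the reply above would look like this (a sketch; the /data container path is the one used throughout this thread):

        -v /mnt:/data:rw    # host /mnt mapped read-write to /data in the container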
  20. I upgraded yesterday from v5.06 to v6b14b. The upgrade went very smoothly with no issues. I am now trying to set up the Crashplan docker. I have ~2.8TB of data already backed up to Crashplan Central. When I set up the Crashplan container with the /data path set to /mnt/user/ and adopted the previous backup set, the Crashplan GUI shows all backed-up files as "missing." I have pictures, videos, movies, etc. to back up, and each of these shares spans two or more physical disks. In my prior configuration in the Crashplan GUI, I had to expand DISK1, DISK2, DISK3, etc. and select the corresponding directories that contained the data to be backed up for each backup set. With /mnt/user/ specified as the data path, the disks and directories do not expand in the same fashion (in fact, I cannot drill down to directories that are part of the share in this way) and everything is reported as missing. The /data node expands to the share names, but if I include these, Crashplan wants to remove everything I have already backed up and start over, as it is a path change and it thinks everything is "new." Generalz responded that he had set /data to /mnt/user/:/mnt/user/:rw to have Crashplan continue backing up existing data sets. When I do this, Crashplan will not install successfully and the container is removed, saying this is an invalid path. I tried adding /data paths for each of the physical disks; the result was the same as setting /data to /mnt/user/. Any ideas for a proper /data configuration that will allow Crashplan to continue on with the prior configuration?
  21. Thanks for this. I too need a sleep script for version 5.0 final, since it disappeared from the main page in the new interface. I was using SF prior to upgrading to v5.0 final, but SF has display problems with this release. I know some just leave their server running 24x7; I do not want to do that as it is lightly used right now. I have modified the unMENU sleep script on the user scripts page such that it now works properly with v5.0 final, and I now have a manual sleep button that works great. With your modified Bagpuss auto-sleep script, do you put it in your go file so it is active on boot up, or do you run it elsewhere? Just curious as to how you are invoking it. I am a total Linux script noob, so I am trying to wrap my head around what the script is actually doing as far as checking NIC activity and the amount of inactive time before sleep is invoked, but I very much appreciate the work you and Bagpuss have done on this.
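      For what it's worth, the usual way to launch such a script at boot is a line at the end of the go file; a sketch, assuming the script is stored on the flash drive at a hypothetical path:

        # appended to /boot/config/go
        bash /boot/custom/auto_sleep.sh &    # run the auto-sleep monitor in the background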
  22. Thanks, I have downloaded wolcmd and configured it with the MAC address of my NIC and the appropriate IP and subnet mask addresses. Even though the NIC supports WOL, it appears my BIOS may not; however, my concern at the moment has more to do with automating the server sleep function before I move on to waking it up. Speeding_Ant's Simple Features Sleep button works great, but I don't know the code behind it to try it in a script.
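      For reference, the Depicus wolcmd tool takes its arguments in the order MAC address, IP address, subnet mask, UDP port; all values below are placeholders:

        wolcmd 001122334455 192.168.1.10 255.255.255.0 9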
  23. The nature of my unRAID server is that it is (for the moment) strictly a media server. I store photos, videos, movies and music on it that do not always need to be accessed. I want to put my unRAID box to sleep automatically after the disks spin down and wake it again when it is needed. I am running v5 beta 14. The sleep script on the wiki page uses "echo 3 > /proc/acpi/sleep", which does not put the server in an S3 sleep state with the latest kernels. I have Simple Features installed and the "Sleep" button works perfectly. Is there any way to modify the sleep script to use whatever method Simple Features is using to sleep the server 5 minutes after disk spin down? This question seems to have been asked before, but I do not see a definitive answer in this thread. It is likely this has been answered and I just missed it; if so, I apologize in advance for the reading comprehension fail. After I get the sleep portion reliably working, I'll tackle the WOL. Right now, I can successfully wake the server by pressing the blinking power button, by a specific keyboard command or at a specific time of day. Eventually, the preferred method is WOL through a magic packet. ethtool eth0 shows that "pumbg" is supported and wake is set to "g", so this indicates the NIC supports WOL, correct?
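      On newer kernels the /proc/acpi/sleep interface is gone; the replacement (and quite possibly what Simple Features calls, though that is a guess on my part) is the sysfs power interface:

        # confirm WOL support and current wake setting first
        ethtool eth0 | grep Wake-on
        # enter S3 suspend-to-RAM
        echo -n mem > /sys/power/state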
  24. Quote: "Just to add a clarification, because it may be confusing to some that a drive seems to be working fine, yet the SMART report says it has FAILED. Part of the idea behind the development of the SMART system is to try to alert users to imminent failure BEFORE it is too late to save data. When a drive indicates a SMART failure, it is trying to warn you that there is a very high probability of complete drive failure in the very near future. The drive may or may not be fully operational at this moment, but even more catastrophic failure is very possible very soon. If there is any important data on the drive, you should attempt to relocate it as soon as possible."
      OK, thanks for the detailed response, I really appreciate it. Since this is a brand new drive and is already in pre-fail, I will return it for a new one.
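      Checking that overall assessment yourself is straightforward with smartctl (the device name is a placeholder; run it against the flagged disk):

        smartctl -H /dev/sdb    # overall health self-assessment (PASSED or FAILED)
        smartctl -a /dev/sdb    # full attribute and error-log report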