Everything posted by langrock

  1. Those two variables still exist. What appears to be gone is the separate key pair used to access the web GUI. Right now it defaults to the root access and secret key. You can define users and groups within the web UI, though, which may then also allow those users, with their separate keys, to back up to the server, but I haven't tried that yet. There's probably some document that explains how to generate compatible keys in case these need to fulfill certain requirements.
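     For what it's worth, a rough sketch of how the root credentials and an extra backup user could be set up, assuming a current MinIO image and that the mc client is available; the container name, alias, paths, and key values below are placeholders, not the exact template settings:
       # newer MinIO images read the root credentials from these environment variables
       docker run -d --name minio \
         -e MINIO_ROOT_USER=myaccesskey \
         -e MINIO_ROOT_PASSWORD=mysecretkey \
         -p 9000:9000 -p 9001:9001 \
         -v /mnt/user/appdata/minio:/data \
         minio/minio server /data --console-address ":9001"
       # add a separate user for backup clients
       mc alias set myminio http://server_address:9000 myaccesskey mysecretkey
       mc admin user add myminio backupuser backupsecret
     Depending on the mc version, a built-in policy such as readwrite then still has to be attached to that user (mc admin policy set on older releases, mc admin policy attach on newer ones).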
  2. Fantastic, works again as advertised both for Arq and Duplicacy. Nice feeling to have the backup running once again!
  3. Thanks for looking into this. Would be super if this could be resolved. I am wondering if the container actually uses the access and secret keys as defined. Screenshot attached. Please let me know if there's anything I can help with or if you see anything I should change about the configuration. Thanks
  4. Looks like I am not the only one. I can no longer connect to Minio from Arq or duplicacy (CLI). I can still log into the Minio web UI after having set the root username and password. It does show the previous buckets and such, so no data is lost. When trying to connect from Arq, I get an error, and duplicacy fails the same way. Mind you, no changes have been made, and the access and secret key are the same as always. I changed the docker network type to 'Host' to get access to the GUI working again. Any suggestions about what to try next would be greatly appreciated. This used to work so well, it'd be a real bummer if I had to move to a different backup system :( Thanks
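     One way to take Arq and duplicacy out of the picture and test the S3 endpoint directly, assuming the MinIO client (mc) is installed somewhere and that the server still listens on port 9000; the alias name and keys are placeholders:
       mc alias set unraid http://server_address:9000 ACCESS_KEY SECRET_KEY
       mc ls unraid    # should list the existing buckets if the keys are accepted
     If mc can list the buckets but Arq/duplicacy still fail, the keys themselves are fine and the problem is more likely the port mapping or the endpoint/region settings on the client side.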
  5. I can no longer renew the certs and am getting the following error message. I have changed absolutely nothing on either the server or the router in many years. Any idea if a recent update to the letsencrypt docker might be causing issues? The only web server running is the one serving the Unraid GUI ... this has not been an issue in the past. Thanks Update: I checked that the port forwarding worked and that I am able to access the apps I am linking to from the outside world, jellyfin and calibre-web in my case, and both still work just fine. The container log doesn't indicate any problems or warnings, but running 'certbot renew' still throws the above error. I am mystified.
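     In case it helps anyone else debugging this, two checks worth running, assuming the standard certbot client inside the container and HTTP validation; the domain is a placeholder:
       certbot renew --dry-run    # exercises the renewal path against the staging endpoint without touching the real certs
       curl -I http://mydomain.example.com/.well-known/acme-challenge/test    # should reach the container from outside; a 404 served by the right web server is fine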
  6. Works like a champ! Thank you very much.
  7. I am having the same problem as @Enkill with the S3 script. After upgrading to Unraid 6.8.3, I couldn't even start the server anymore. Hitting the Start button redirected to http://server_address/update.htm, and the Reboot button redirects to http://server_address/webGui/include/Boot.php without actually rebooting. There was also the same error message he is showing at the bottom of the Main tab. Removing the S3 plugin restored the normal functionality, but I really would like to be able to put the server to sleep, since I only use it sporadically and cannot justify burning up the power to keep it on all the time. If someone could recommend a solution, that would be awesome! Re-installing it does not do anything for me either; the Sleep button doesn't even show up. So, for now, I suppose it's best to remove it and shut the server down until this can be fixed. Is there another way to sleep the server? Maybe over the command line? Thanks
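     Regarding the command-line question: a minimal sketch of what the plugin presumably does underneath, assuming the kernel and BIOS on this board support S3 suspend at all:
       cat /sys/power/state             # lists the sleep states the kernel supports (look for 'mem')
       echo -n mem > /sys/power/state   # suspend to RAM; wake-on-LAN or the power button brings it back, if configured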
  8. There must be some sort of API already, otherwise how does Margarita, available for macOS, do this? Hadn't heard about ControlR, will check it out.
  9. Thanks. I'll keep an eye on the error count in case it shoots up, as reported by some forum members when a drive actually goes south hard.
  10. Came across this post when looking for the significance of reported disk errors. Yesterday, I saw that one of the disks reported 240 errors, see attached image. I had run a parity check before, which didn't turn up any errors, and then also a full SMART test, again allegedly without showing errors; the report is attached as well. My question, and I hope someone more experienced than I am can answer this, is whether or not this disk needs to be replaced right away. Also, in that case, and I haven't had to do this yet, how difficult is it to replace a disk and restore the array? I am imagining that one would somehow hook up the new disk and run the preclear.sh routine before pulling the old disk and installing the new one in its place. Thanks in advance for any advice, Carsten WDC_WD10EADS-00L5B1_WD-WCAU49184354-20160918-0824.txt
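     For anyone comparing notes later, the SMART data can also be pulled and an extended self-test started from the Unraid command line; this is just generic smartctl usage, with /dev/sdX standing in for the actual device:
       smartctl -t long /dev/sdX   # kick off an extended self-test (can take several hours)
       smartctl -a /dev/sdX        # afterwards, check attributes such as Reallocated_Sector_Ct and Current_Pending_Sector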
  11. Thanks. Followed [this link](http://lime-technology.com/forum/index.php?topic=38323l), which was mentioned in the FAQ, and it appears to have worked, at least for the official Plex docker image. Everything seems to be in its place. Now I'll see if the Crashplan docker can also be restored ...
  12. I am getting the same error after updating to Unraid 6.2. I read that removing the app and re-installing it would help, but now I am getting the same error when trying to re-install it from the community apps page. I am kinda getting worried since I put a lot of effort into maintaining the Plex server's database. I hope that removing the docker image didn't nuke the whole thing. Thanks, Carsten
  13. I am having trouble installing this Docker image. I am getting the following error messages. Maybe those mean something to somebody on this forum and I'd appreciate a hint on what to try next. I tried both directly installing the Docker image as well as going through the Community Applications page. No luck, me sad :(
       ---------------------------------------------
       Pulling image: gfjardim/crashplan:latest
       IMAGE ID [2d02fe93d96e]: Pulling image (latest) from gfjardim/crashplan.
       Pulling image (latest) from gfjardim/crashplan, endpoint: https://registry-1.docker.io/v1/. Pulling dependent layers.
       IMAGE ID [f7eef3e8d2a5]: Pulling metadata. Error pulling dependent layers.
       IMAGE ID [2d02fe93d96e]: Error pulling image (latest) from gfjardim/crashplan, endpoint: https://registry-1.docker.io/v1/, HTTP code 400.
       Error pulling image (latest) from gfjardim/crashplan, HTTP code 400.
       TOTAL DATA PULLED: 0 B

       Command:
       root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="CrashPlan" --net="host" -e TZ="America/Los_Angeles" -p 4242:4242/tcp -p 4243:4243/tcp -p 4280:4280/tcp -p 4239:4239/udp -v "/mnt/cache/appdata/CrashPlan":"/config":rw -v "/mnt/user":"/UNRAID":rw -v "/mnt/user/Backup/":"/backup":rw -v "/mnt/disks/":"/unassigned":rw gfjardim/crashplan

       Unable to find image 'gfjardim/crashplan:latest' locally
       Pulling repository gfjardim/crashplan
       2d02fe93d96e: Pulling image (latest) from gfjardim/crashplan
       2d02fe93d96e: Pulling image (latest) from gfjardim/crashplan, endpoint: https://registry-1.docker.io/v1/
       2d02fe93d96e: Pulling dependent layers
       f7eef3e8d2a5: Pulling metadata
       f7eef3e8d2a5: Error pulling dependent layers
       2d02fe93d96e: Error pulling image (latest) from gfjardim/crashplan, endpoint: https://registry-1.docker.io/v1/, HTTP code 400
       2d02fe93d96e: Error pulling image (latest) from gfjardim/crashplan, HTTP code 400
       Error pulling image (latest) from gfjardim/crashplan, HTTP code 400
       The command finished successfully!
       ---------------------------------------------
       Thanks, Carsten
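     If anyone wants to narrow this down, the pull can also be tried outside of the Unraid web UI, which at least separates registry-side problems from template/plugin problems; this is plain Docker CLI usage, nothing specific to this image:
       docker pull gfjardim/crashplan:latest   # same pull the web UI attempts; the HTTP 400 should reproduce here if the registry is the problem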
  14. I spent some more time with the system tonight. For good measure, I reset the BIOS by removing the battery for a couple of hours while the system was unplugged. No luck. As of tonight, I cannot even use an external USB keyboard to get into the BIOS, but have to use a PS2-style one. Looking at the num-lock light of an attached USB keyboard, it appears that the power to the USB ports - both front panel and back panel - is being established too late in the boot sequence for the computer to read the data from the flash drive. I even suspect that the computer cannot source enough current to properly operate a flash drive; the flash drive's LED is pretty dim when connected, and I cannot install an updated BIOS firmware from a third flash drive. This is all pretty strange to me, and there are two things I'd like to try next: first, use a powered USB hub, and second, use a USB charger-doctor pass-through to check the current draw. The computer itself seems to be working, and I got it to boot from a Linux Live CD w/o apparent problems. If someone has additional ideas about what might be going on, I'd appreciate your help, but I realize that this has now moved outside the scope of this forum. It would seem kinda sad to rip the motherboard out just because the on-board USB interface isn't working. Which reminds me, maybe one could try a USB PCI card ... might have one of those lying around at work. Thanks again for the helpful suggestions. Carsten
  15. The motherboard doesn't have any USB3 ports, so we can rule that out, and the drives themselves are USB2 ... and at least the original one, registered with unRAID in 2010, has worked ever since. Good point about the battery. I will do a little experiment, setting the time and then unplugging the box from the outlet. Other modes are USB FDD and USB CDROM. When I checked the BIOS last night before loading the default settings, the boot sequence was set to CDROM (first), USB HDD (second), disabled (third). I can try that once more if nothing else works. Do you think that a dead battery could confuse the board? I always thought it was only there to keep feeding the RTC when the machine is unplugged. Thanks, Carsten
  16. Yes, I hooked up a monitor after booting didn't appear to complete on two separate USB sticks. Another explanation is that the motherboard is confused. After nothing appeared to work, I went into the BIOS settings, loaded the board's defaults, and changed the boot sequence to USB HDD first, with the second and third options disabled. The SATA interface is set to emulate native IDE, though I think that shouldn't make a difference this early in the boot sequence. When the box boots up, it displays "Verifying DMI pool ...", followed by "Boot Error". Sometimes it cycles through the boot process a few times until it finally hangs after "Verifying DMI pool ...". I used a Windows box at work to create two flash drives with a fresh install of 6.0.1 and tested them on that same PC; at least that computer appeared to boot from them just fine. I'll try these once more when I get home. If they fail at home, that should point to an issue with the board or the BIOS settings. That computer had been running unRAID for many years w/o a problem, so it's kinda odd. Thanks, Carsten
  17. Matt, have you ever figured out what happened? I am in the same boat: v5 working fine, tried to upgrade to 6, and now I am getting a boot error after POST on two different USB sticks, freshly formatted and made bootable. I am now trying to copy back the old flash contents, but I am doubtful it'll go back to working. Thanks, Carsten
  18. Hi, I noticed that the cache_dirs process ignores the folder(s) specified in the 'Excluded folders (separated by comma):' section of the SimpleFeatures GUI of unRAID 5.0.4. For example, I have a 'Plex' share which lives only on the cache drive. I had noticed that whenever cache_dirs was looking into this directory, the CPU usage went up by a lot, >100% on a dual-core AMD machine. I was hoping that instructing cache_dirs to ignore this directory would work, but alas, I still see the process looking into this directory when issuing ps fo pid,cmd -U root on the command line at the right moment. The processes in question are
             /bin/bash /usr/local/sbin/cache_dirs -w -B -m 10 -M 600 -d 9999 -e 'Plex'
       27607  \_ /bin/bash /usr/local/sbin/cache_dirs -w -B -m 10 -M 600 -d 9999 -e 'Plex'
       27636      \_ find /mnt/cache/Plex -noleaf
       which does show 'Plex' as the argument for the -e option, yet find still descends into /mnt/cache/Plex. I also set the scan interval to 10 seconds, since I didn't understand why this process would need to run every second. Maybe I don't understand how to properly use the exclude function, and I would appreciate helpful comments. The share I'd like to exclude is '/mnt/cache/Plex'. I had also tried to enter the full path name in the 'Excluded folders' section with no success. Thanks in advance, Carsten
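     A quick way to watch, independent of the GUI, which directories cache_dirs is actually walking, assuming watch is available on the box; the grep pattern is just an example:
       watch -n 1 "ps fo pid,cmd -U root | grep -E 'cache_dirs|find'"   # the find lines reveal in real time which directories are being scanned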
  19. Hi, I am running the latest version of unRAID (5.0.4). After replacing my aging cache drive with a small SSD, I looked at write/read speeds, both directly to the cache drive and to a share which has the cache drive enabled. Both were mounted via AFP (which seemed faster than SMB on my Mac running 10.7). I used Blackmagic Disk Speed Test and found that I could get 70 / 96 MB/s write / read performance when targeting the cache drive, but only 30 / 96 MB/s write / read when targeting the cached share. I had assumed that there should not be any difference, since anything written to the share goes to the cache drive first. Can someone please shed some light on this reduced write speed? I was also surprised to see that AFP access was significantly faster than going through SMB: 32 / 49 MB/s write / read to the cache drive and 26 / 42 MB/s write / read to the cached share when mounting via SMB. I'd love to hear more about how to increase the write performance. I've cranked up the MTU on the unRAID box as high as I could and matched the Mac's MTU to that number, 6122. Both machines are connected via 1G ethernet through a switch which supports the increased frame size. Thanks in advance, Carsten
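     Two sanity checks that can be run from the Mac side, assuming the share is mounted under /Volumes and the server answers to tower.local (both placeholders); 6094 is the 6122-byte MTU minus 28 bytes of IP/ICMP headers:
       dd if=/dev/zero of=/Volumes/ShareName/ddtest bs=1m count=1024   # crude write test straight to the mounted share (creates a 1 GB scratch file)
       ping -D -s 6094 tower.local                                     # confirms jumbo frames survive the whole path; -D sets don't-fragment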
  20. Seems that this indeed did the trick. I can now perform a backup to an external drive using rsync, just as was possible in the good old days of 4.x. Thanks so much for hunting down this annoying bug. Next I'll test whether the Crashplan module also works again as it used to.
  21. Just upgraded to 15a and tried rsync on my music directory to back it up to an external drive. The same problem still persists: early during the rsync run, the 'transport endpoint is not connected' error gets thrown, and only a reboot of the system makes the directories appear again. Not sure what else to do here. Being able to use rsync and Crashplan was essential to how I used unRAID under version 4.7.
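     For context, the kind of invocation that triggers it, with placeholder paths for the music share and the externally mounted drive; nothing exotic about the flags:
       rsync -av --progress /mnt/user/Music/ /mnt/external/Music/   # fails early with 'Transport endpoint is not connected (107)' and the user shares vanish until a reboot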
  22. As you might have seen, I've tried the 'ulimit' setting in the Go script already and it didn't have any effect. I'll next try the sysctl fs.inotify.max_user_watches=20000 idea from Tom to see if I can get rsync to back up my music directory. Will report back. Joe L.: "I think you need both an increased number of open files possible and an increased number of user-watches. I've got my max_user_watches set to 100000." It didn't make a lick of a difference to have both the ulimit and fs.inotify.max_user_watches changes in the Go script. Rsync throws a 'Transport endpoint is not connected (107)' error pretty early on when trying to create the sync list, and this causes the shares to disappear. No other service was running at the time, and the system had gone through a fresh reboot. I'd like to help out here, since I can easily reproduce this error. Let me know what other information or log file I could provide to troubleshoot this issue. Thanks, Carsten
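     For reference, a minimal sketch of the two additions being discussed, assuming the usual Go script location at /boot/config/go and placed before emhttp is started; the exact values are the ones mentioned in this thread:
       ulimit -n 20000                                # raise the open-file limit for everything launched from the Go script
       sysctl fs.inotify.max_user_watches=20000       # raise the inotify watch limit (Tom's suggestion; Joe L. runs 100000)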
  23. As you might have seen, I've tried the 'ulimit' setting in the Go script already and it didn't have any effect. I'll next try the sysctl fs.inotify.max_user_watches=20000 idea from Tom to see if I can get rsync to back up my music directory. Will report back. Joe L.: "I think you need both an increased number of open files possible and an increased number of user-watches." I have both mods in my Go script and am booting as we speak. In case it matters, both mods are called before the management utility (emhttp) is started.
  24. As you might have seen, I've tried the 'ulimit' setting in the Go script already and it didn't have any effect. I'll next try the sysctl fs.inotify.max_user_watches=20000 idea from Tom to see if I can get rsync to back up my music directory. Will report back.
  25. I'd also like to know how to permanently change the open-file limit via 'ulimit'. It seems that the 'ulimit -n 20000' command in the Go script is being ignored, since issuing 'ulimit -Hn' on the command line results in an answer of 4096. I'm pretty sure that the whole number-of-open-files limit has something to do with many of the problems people are having. The current soft limit on my system seems to be 1024, just like for everybody else. Rsync seems to want to open more files than the system allows, and hence causes Unraid to crash; the computer itself doesn't crash, but the service goes away. I'd love to have this problem resolved soon, so I can go back to using my server the way I did under Unraid 4.x and won't be forced to migrate to a commercial NAS, which would be a pain in the neck. Thanks, Carsten
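     A way to check which limits a running process actually inherited, rather than what the current shell reports; this is standard Linux /proc usage, with emhttp as the example process (substitute whatever process ends up spawning rsync):
       ulimit -Sn && ulimit -Hn                             # soft and hard limits of the current shell
       grep 'Max open files' /proc/$(pidof emhttp)/limits   # limits the management process actually got at launch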