scloutier

Everything posted by scloutier

  1. I got that error when trying to set a very short password (admin). Put in something longer and it works...
  2. Reviving an old post, but in case someone's interested: I have a TS-673A, got a compatible thumb drive, made it into an Unraid boot drive, popped it into my QNAP's front USB port, and it booted right up. Unraid seems to work out of the box. Still testing things, but so far so good.
  3. OK, so using cat -v filename actually shows where the issues are. In the upload script the problem was inside two blank lines; will go through the other script. Really weird issue. Edit: going through, removing any broken portion, and retyping it worked.
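     For anyone else hitting this, here is roughly what it looks like when a "blank" line is actually hiding a UTF-8 BOM (the file name below is just an example):
         $ printf '\xef\xbb\xbf\n' > bom-test.sh   # a "blank" line that really contains the three BOM bytes
         $ cat -v bom-test.sh
         M-oM-;M-?
         $ bash bom-test.sh
         bom-test.sh: line 1: $'\357\273\277': command not found
     Same $'\357\273\277' error as in my scripts, so cat -v makes the bad bytes visible.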
  4. So clearly this has something to do with UTF-8 encoding. Digging to find a solution.
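     One thing I'm going to try is stripping the BOM bytes (EF BB BF) out of the saved script directly. Something along these lines should do it with GNU sed (untested, and /path/to/script stands in for wherever the user script is actually stored):
         sed -i 's/\xef\xbb\xbf//g' /path/to/script   # delete every UTF-8 BOM sequence in place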
  5. Right, but even downloading the file from the raw GitHub page directly in my Unraid terminal and renaming it does the same thing. No copy/paste involved.
  6. Alright, I used vi to paste the script; same issue, and I'm running out of ideas. Will try to wget the file directly from the shell and rename it to see if that works. What's weird is that part of it works:
     /tmp/user.scripts/tmpScripts/rclone_gdrive_upload/script: line 4: $'\357\273\277': command not found
     13.03.2019 15:00:01 INFO: rclone installed successfully - proceeding with upload.
     The first file check (for rclone_upload) fails, but the second one, which looks for mountcheck, seems to work.
  7. I tried to copy both scripts from GitHub raw directly into PuTTY, and also tried pasting them into the Unraid script editor; results are the same. Will see if I can download them directly from the shell then. I'm also confused because the if documentation I found only shows single brackets (https://ss64.com/bash/syntax-file-operators.html). Their "check if a file exists" example is:
     if [ -f ~/some-file ]; then
       echo "The file exists"
     fi
     which also has no quotes around the file path. Thanks
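     For reference, both bracket styles are legal in bash: [ is the POSIX test command and [[ is a bash keyword, and the quotes around the path only really matter with [ when the value could contain spaces. A quick sketch using the same path the script checks (the echo lines are just placeholders):
         if [ -f "/mnt/user/appdata/other/rclone/rclone_upload" ]; then
             echo "upload already running"   # single brackets, POSIX test
         fi
         if [[ -f /mnt/user/appdata/other/rclone/rclone_upload ]]; then
             echo "upload already running"   # double brackets, bash-only keyword
         fi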
  8. I am also experiencing similar issues. Any operation that involves updating, starting, or stopping a Docker container takes forever now, and the log also shows lots of "upstream timed out" errors. A reboot didn't fix it; I disabled most of my Docker containers to see if I could isolate a culprit and removed some plugins (that had been there for a long time), but nothing seems to help. This started somewhat recently (couldn't say when exactly).
  9. OK, so I got everything configured. However, when I run the upload script I get the following:
     Script Starting Tue, 12 Mar 2019 22:35:24 -0400
     Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_gdrive_upload/log.txt
     /tmp/user.scripts/tmpScripts/rclone_gdrive_upload/script: line 4: $'\357\273\277': command not found
     /tmp/user.scripts/tmpScripts/rclone_gdrive_upload/script: line 26: $'\357\273\277': command not found
     12.03.2019 22:35:24 INFO: rclone not installed - will try again later.
     Script Finished Tue, 12 Mar 2019 22:35:24 -0400
     Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_gdrive_upload/log.txt
     I also get this when running the cleanup script:
     Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_gdrive_cleanup/log.txt
     12.03.2019 22:44:55 INFO: starting unionfs cleanup.
     find: `/mnt/user/mount_unionfs/google_vfs/.unionfs': No such file or directory
     find: `/mnt/user/mount_unionfs/google_vfs/.unionfs': No such file or directory
     Script Finished Tue, 12 Mar 2019 22:44:55 -0400
     Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_gdrive_cleanup/log.txt
     Both the mount and unmount scripts seem to run fine. Any advice? Thanks
     Edit: if I just change the entries to remove one of the bracket sets (if [ -f "/mnt/user/appdata/other/rclone/rclone_upload" ]; then), the script seems to run.
  10. Hi, I created a few VLANs to isolate my VMs and Docker containers from the main network. I added my VLAN in the Unraid network config and assigned a static IP address and gateway in my Docker container configuration. I can enable the custom network on the new interface (br0.30); however, the gateway field is blank for anything but my br0 interface. If I put two containers in that network segment they can ping each other without issues, but they can't seem to ping anything else: not the IP assigned to br0.30, not my .1 default gateway on which my router sits. Getting confused as to what I'm doing wrong. Update: I managed to get the gateway field to populate by removing the Unraid IP address on that bridge interface (set to none); however, things are the same, and I can only ping other containers inside the segment. As a note, my other VLAN (br0.40), on which I have VMs, works as it should. Thanks, Simon
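      In case it helps anyone comparing notes, this is roughly what the custom br0.30 network looks like under the hood as a plain Docker macvlan network; the subnet, gateway, addresses, and names below are made up, and on Unraid the network is normally generated from the Docker settings page rather than created by hand:
          docker network create -d macvlan \
              --subnet=192.168.30.0/24 \
              --gateway=192.168.30.1 \
              -o parent=br0.30 \
              vlan30
          docker run -d --network=vlan30 --ip=192.168.30.50 alpine sleep 3600   # container pinned to the VLAN
      One thing worth noting: with macvlan, a container can never reach the host's own IP on the parent interface, so not being able to ping the address assigned to br0.30 itself is expected by design.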
  11. Seems like wget in the maria-websql docker is no longer happy about the sourceforge.net certificate, and because of that the container fails to start. This should be a rather easy fix; think it can be done? Thanks
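      Two possible angles if it helps (both untested on this image, and I'm only assuming a Debian/Ubuntu-style base): refresh the CA bundle inside the container, or as an uglier stopgap tell wget to skip verification (the URL below is a placeholder):
          apt-get update && apt-get install -y ca-certificates && update-ca-certificates   # refresh the trusted CA store
          wget --no-check-certificate "<the-sourceforge-download-url>"                     # last-resort workaround, skips the security check
      A newer base image or the CA refresh would obviously be the proper fix.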
  12. OK, submitted the fix for the echo issue to GitHub. Hopefully this will be the last of it.
  13. So the Splunk home works. But looking at the .conf file itself, it might be missing a newline between the SPLUNK_OS_USER comment block and the optimistic flag:
      # If SPLUNK_OS_USER is set, then Splunk service will only start
      # if the 'splunk [re]start [splunkd]' command is invoked by a user who
      # is, or can effectively become via setuid(2), $SPLUNK_OS_USER.
      # (This setting can be specified as username or as UID.)
      #
      # SPLUNK_OS_USEROPTIMISTIC_ABOUT_FILE_LOCKING = 1
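      One way the Dockerfile append could be changed so the flag always lands on its own line (just a sketch, not necessarily the exact fix that got submitted):
          RUN printf '\nOPTIMISTIC_ABOUT_FILE_LOCKING = 1\n' >> $SPLUNK_HOME/etc/splunk-launch.conf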
  14. Waiting for the docker to finish building and will give it a try.
  15. In all cases I still have this issue in my log: homePath='/opt/splunk/var/lib/splunk/audit/db' of index=_audit on unusable filesystem.
  16. On GitHub I see this in the Dockerfile:
      RUN echo "OPTIMISTIC_ABOUT_FILE_LOCKING = 1" >> $SPLUNK_HOME/etc/splunk-launch.conf
  17. Greetings, I was trying to get Splunk running; however, I ran into the BTRFS issue. The latest Splunk version does have proper support for it. Any chance you could update the docker? Ref: http://docs.splunk.com/Documentation/Splunk/6.3.0/Installation/SystemRequirements#Supported_file_systems Thanks a lot! Simon