shirosai

Members
  • Posts: 21
  1. This has been fixed in the latest lsio release; alternatively, you can shell into the diskover container and, in the diskover-web root directory, copy customtags.txt.sample to customtags.txt.
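The copy step can be sketched as below; a scratch directory stands in for the diskover-web root inside the container (the real path depends on the image, so check it after you shell in):

```shell
# Simulate the fix in a scratch dir; inside the real container you would
# `docker exec -it diskover bash` and cd to the diskover-web root instead.
demo=$(mktemp -d)
printf 'tag1\ntag2\n' > "$demo/customtags.txt.sample"   # stand-in for the shipped sample
cp "$demo/customtags.txt.sample" "$demo/customtags.txt" # the actual fix
ls "$demo"
```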
  2. Mount the remote host with either smb/cifs or nfs and use diskover's -d <path> cli option to crawl that mount point path. You can also use fuse for s3 and other cloud storage. The treewalk client is just an additional crawl option for diskover, not a requirement for crawling remote hosts; you can just use diskover.py. The linuxserver.io container uses /data as the mount point that gets crawled when the container runs. You can always point /data at something else, or you can shell into the container and run diskover.py.
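As a sketch, with the lsio container you can mount the share on the host and map it over /data; the host path, share name, and service layout below are illustrative:

```yaml
# docker-compose fragment; mount the remote share on the host first, e.g.
#   mount -t nfs nas:/export /mnt/nas
services:
  diskover:
    image: linuxserver/diskover
    volumes:
      - /mnt/nas:/data   # /data is the path the container crawls at startup
```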
  4. The missing reference to qumulo comes from an issue with the lsio container right now: the newest config was released before the latest version of diskover, which removes the qumulo code. I just released v1.5.0-rc30 of diskover; when that gets packaged and released by lsio, this issue will be gone. For now you can roll back to a previous build of the diskover lsio Docker Hub image, wait until the next release (which will include v1.5.0-rc30), or add a section with a [qumulo] header to your diskover.cfg file as a temporary fix.
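The temporary fix might look like this in diskover.cfg; whether the section needs any keys under it is an assumption, an empty header may be enough to satisfy the config parser:

```ini
; temporary stub until the container ships v1.5.0-rc30
[qumulo]
```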
  5. For anyone always getting "diskover-" as the index name: if you leave the INDEX_NAME param empty or remove it, it should be set to diskover-<date>. At least that's what I believe @hackerman-lsio set it to do...
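In compose terms that might look like the fragment below; the variable name is taken from the post, and the default-to-date behavior is as described above:

```yaml
environment:
  # Leave INDEX_NAME empty (or remove the line) to get diskover-<date>;
  # set a value only when you want a fixed index name.
  - INDEX_NAME=
```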
  6. I think @hackerman-lsio hasn't released it yet... maybe someone can write one for Unraid to help all the Unraid folks out? I don't use Unraid...
  7. diskover and diskover-web don't delete files automatically. Tagging is for managing your files; from there you can use the diskover-web REST API, or export file lists from diskover-web, to run cleanup/archive scripts against the tagged files/dirs.
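A hypothetical cleanup script over such an exported list might look like this; tagged_files.txt is a stand-in name for whatever you export, and the demo paths are invented:

```shell
# Build a demo file list standing in for a diskover-web export.
printf '%s\n' /data/old/a.log /data/old/b.log > tagged_files.txt
# Dry-run cleanup: print the rm commands instead of running them.
# Drop the "echo" only after you have verified the list.
out=$(while IFS= read -r f; do
  echo rm -- "$f"
done < tagged_files.txt)
printf '%s\n' "$out"
```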
  8. The crawlbot section is separate from the cron scheduling in the container. The crawlbot section isn't used unless you are running diskover.py manually with the --crawlbot cli arg. To update your index daily, you'll need to schedule it inside the container using cron.
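As a sketch, a daily crontab entry inside the container might look like the line below; the script path and the /data mount point are assumptions based on the lsio container defaults:

```
# Re-crawl /data every day at 02:00 (diskover.py path is an assumption)
0 2 * * * python /app/diskover/diskover.py -d /data
```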
  9. Try v1.5.0-rc28-ls14 when it's released later today or tomorrow and let me know if everything is working. I pushed a PR to the lsio container to update the default cfg shipped with the container, and @hackerman accepted it, so that will be in the new container builds now.
  10. Have you tried bashing into the container and running it from there? DISKOVER_OPTS would be the right place to put it at container startup, but you also need to specify the index name you want to run finddupes on using -i <index>. I think there is an issue with DISKOVER_OPTS not getting set in the diskover container on Unraid; can anyone verify this? Try adding -w 20 to WORKER_OPTS and see if 20 bots start up.
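In the container template those env vars might look like the fragment below; the dupe-finding flag name and the index name are assumptions, so check `diskover.py --help` for the exact option:

```
# Hypothetical values; -i names the index to run dupe-finding against
DISKOVER_OPTS=--finddupes -i diskover-myindex
# Start 20 worker bots
WORKER_OPTS=-w 20
```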
  11. I've sent a message to @hackerman to add them in. I think the diskover config in the lsio GitHub repo is old, and the build uses that instead of pulling new versions from the diskover GitHub repo.
  12. Try what I wrote at the bottom and let me know if it works. You can also just install python2 or python3 on your Windows host along with gource, and install the elasticsearch 5 Python module using pip; then you don't need to ssh or run anything in the docker container, and can pull the ES index data right from the Windows host that has the diskover GitHub files on it. https://github.com/shirosaidev/diskover/wiki/Gource-visualization-support
  13. Copy over any missing sections from diskover.cfg.sample to diskover.cfg.
  14. @Wuast94 https://github.com/shirosaidev/diskover/wiki/Gource-visualization-support has some information at the bottom about getting gource data out of diskover onto another machine. You will need ssh access to the diskover docker container. Windows needs to be running a bash shell with gource installed, plus the diskover GitHub files, to run the diskover-gource.sh script on the Windows machine. The | (pipe) redirects the ssh output of the diskover.py python command (you may need to set this and python to absolute paths) running in the diskover docker container to your Windows machine.
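The pipe described above can be sketched as below; the host, container name, paths, index name, and the gource-output flag are all assumptions (check `diskover.py --help` and the wiki page for the exact option). The command string is printed here rather than executed so the pieces are easy to read:

```shell
# Hypothetical crawl command running inside the diskover container
CRAWL_CMD='python /app/diskover/diskover.py --gourcert -i diskover-index'
# ssh runs the crawl remotely; its stdout is piped into gource on this machine
PIPELINE="ssh user@dockerhost \"docker exec diskover $CRAWL_CMD\" | gource --log-format custom -"
echo "$PIPELINE"
```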
  15. It looks like there is a newly built v1.5.0-rc25-ls10 image on Docker Hub. Please try it out and reply back if it's working okay. Thank you.