blahblah0385


Posts posted by blahblah0385

  1. On 9/15/2020 at 2:50 PM, rob_robot said:

    For those that are stuck like me on the ingest-attachment plugin issue:

     

    You need to stop the Elasticsearch docker and restart it after you have executed the plugin install command, so the plugin gets loaded into Elasticsearch.

     

    Here are my steps:

    1.) Get the Elasticsearch docker (7.9.1 works) and do a clean install (delete the old Elasticsearch data in /mnt/user/appdata/)

     

    2.) Download the Full Text Search packages from the Nextcloud app store (at least 3 packages)

     

    3.) Configure your Nextcloud search platform to "Elasticsearch" and the address of the servlet to http://YOUR_IP:9200/

    It needs to point at the port of the REST API.

     

    4.) Install the plugin for Elasticsearch, either by opening a console inside the Elasticsearch docker and typing /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch ingest-attachment

    OR

    through the User Scripts Unraid plugin as stated above.

     

    5.) Restart the Elasticsearch container

     

    6.) Test everything by opening a new shell in the Nextcloud container, navigating to the occ directory (/var/www/html), and typing

    ./occ fulltextsearch:test

     

    If everything is OK, then you can continue with the indexing: ./occ fulltextsearch:index

     

     

    This worked great with Nextcloud 20.0.2 and Elasticsearch 7.9.3. Do I need to re-index manually each time files are added? To get OCR working for PDF files I installed Tesseract OCR. Will it start running OCR automatically on all files once enabled, or does it need to be run manually?
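    The install-and-test sequence above, condensed into one pass from the host (the container names "elasticsearch" and "nextcloud" are assumptions; substitute your own, and some images require running occ as the web user, e.g. with -u www-data):

    ```shell
    # Install the ingest-attachment plugin inside the Elasticsearch container
    docker exec elasticsearch /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch ingest-attachment
    # Restart so the plugin actually gets loaded
    docker restart elasticsearch

    # Verify the wiring from inside the Nextcloud container, then build the index
    docker exec -w /var/www/html nextcloud ./occ fulltextsearch:test
    docker exec -w /var/www/html nextcloud ./occ fulltextsearch:index
    ```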

  2. I’m bringing up an old topic regarding Full Text Search in Nextcloud 20. I installed it along with the supporting apps, including ElasticSearch. I installed the docker for ElasticSearch on my unraid server as well as the custom script outlined on the docker install page. I also set up a reverse proxy for the ElasticSearch docker so it can be accessed in Nextcloud, and used the default user/pass of elastic/changeme. Still no go. Any ideas??

  3. Hi, I have Nextcloud set up with this docker and it works great. Is there a way to mount the Nextcloud data folder as a network drive in OS X without using WebDAV? WebDAV is very slow when uploading larger files, with file write errors. I know I can upload via browser, but it would be nice to have mounted-drive access. I would prefer not to sync all the files to my local hard drive via the sync client app.

     

    Also, any tutorials for setting up full text search??

     

    Thank you! 

  4. 2 hours ago, aleary said:

     

    I had a domain name already, so I haven't tried it with a DDNS domain name. 

     

    From doing a quick search, it seems that it is possible to use LetsEncrypt with sub-domains from DDNS services. It looks like there were problems in the past with limits on the number of certificates issued per domain, but as long as the domain used is on the public suffix list (https://publicsuffix.org), that shouldn't be a problem.

     

    Some of the guys on the LetsEncrypt docker forum may be able to provide more info on that type of setup.

     

     

    OK, thanks, I will give it a try.

     

    Noob question. I plan on following the guide here: https://www.linuxserver.io/2017/05/10/installing-nextcloud-on-unraid-with-letsencrypt-reverse-proxy/

     

    Do I need to install Apache? It doesn't seem like I need to with LetsEncrypt and the reverse proxy.

     

    How do you set up DNSMasq? I don't see an app for it. Is it just a script? Thanks.

  5. 4 hours ago, aleary said:

     

    I'm not quite sure how to fix your specific setup, i.e. using AirVPN, etc., but I've got mine set up so I can use the same URL whether internally, externally or over VPN.

     

    Basically, I'm using the LetsEncrypt docker to provide SSL and Reverse-Proxy for Nextcloud and other dockers. My router is then configured to forward port 443 from outside to the LetsEncrypt docker, which in turn proxies connections to the NextCloud docker. Internally, I'm using DNSMasq for DNS and to override the external hostname with the IP address of the LetsEncrypt docker.

     

    In all cases I'm connecting to the URL https://nextcloud.mydomain.com:443/ which connects to the LetsEncrypt docker. I don't connect to NextCloud directly.

     

    So, from outside I connect to https://nextcloud.mydomain.com:443/  which the router forwards to the LetsEncrypt docker at https://192.168.x.x:443/. This then proxies on to the NextCloud instance at https://192.168.x.x:943/

     

    On my local network, DNSMasq is configured to resolve "nextcloud.mydomain.com" to "192.168.x.x", which means that I can use the same hostname to connect to the LetsEncrypt proxy internally.

     

    When connected over VPN, I have DNS configured to resolve over the VPN connection, so this works as if I were on the internal network, again using DNSMasq to provide the internal IP address for "nextcloud.mydomain.com".

     

    Hope that might give you some ideas.

     

    /Alan.

     

    Did you purchase a domain name in order to set this up, or were you able to use a dynamic DNS service like No-IP.com?
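    The DNSMasq override described in the quoted post comes down to a single directive (the hostname and LAN address are the placeholders from that post):

    ```
    # dnsmasq.conf: answer local queries for the Nextcloud hostname with the
    # LAN address of the LetsEncrypt proxy instead of the public IP
    address=/nextcloud.mydomain.com/192.168.x.x
    ```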

  6. Hello all, I asked for assistance regarding this issue before but have not been able to get it figured out. Basically, my unRAID server is behind a VPN (AirVPN) running on my Asus Merlin router with OpenVPN, and everything behind the router goes through the VPN. I am trying to figure out a way to have a single address where I can access my Nextcloud whether I am remote or at home, instead of having to switch from the remote address to the local IP address. I am not sure if I must set up a reverse proxy or purchase a domain to redirect in order to do this. Any assistance would be appreciated. Below is my setup. Thank you in advance!!

    I am trying to set up access to Nextcloud on my home server running on unRAID. I currently have the local IP of Nextcloud as:

    https://192.168.1.75:443

    This port has been forwarded in AirVPN port forwarding as TCP/UDP 3800 to local port 443.

    I have the following nat-start script on my ASUS router running Merlin in /jffs/scripts:

     

    # Allow forwarded traffic from the VPN tunnel (tun11) to reach Nextcloud on 443
    iptables -I FORWARD -i tun11 -p udp -d 192.168.1.75 --dport 443 -j ACCEPT
    iptables -I FORWARD -i tun11 -p tcp -d 192.168.1.75 --dport 443 -j ACCEPT
    # DNAT inbound 443 arriving over the tunnel to the Nextcloud host
    iptables -t nat -I PREROUTING -i tun11 -p tcp --dport 443 -j DNAT --to-destination 192.168.1.75
    iptables -t nat -I PREROUTING -i tun11 -p udp --dport 443 -j DNAT --to-destination 192.168.1.75


    The router has a static IP set for 192.168.1.75. 

    Also there is a setting under OpenVPN client in the router to "Redirect internet traffic" for which I have selected Policy rules and set 192.168.1.75 to be directed through the VPN instead of WAN.

    I can easily access Nextcloud via https://****.airdns.org:3800.

    However, when I try to access it internally, on local WiFi, via this address, it times out. I tried to set up forwarding through no-ip.com, e.g. https://****.ddns.net redirected to AirVPN-exitIP:3800, but that doesn't seem to work either.

    Any ideas how I can get this to work both externally and internally through a single address? Do I need to setup a relay?

    Thank you!!

  7. 3 minutes ago, billium28 said:

    I am trying to add existing shares into the Nextcloud system. I believe I have installed Nextcloud correctly, but when I go to add external storage I do not see what to fill in in these places.

     

    I basically have 2 shares in my Unraid, Media and Home Media, that I want Nextcloud to point to. I don't want to start moving my existing files to a new Nextcloud folder. I just want to open Nextcloud and see my files. Maybe I can't do this with this software, but I am really sure I can. I want to use Nextcloud as a personal, small-scale Google Photos.

    Check out the attached picture. This is done under the Nextcloud docker: hit Edit and click "Add another path" at the bottom, following the picture below. Then enable External Storage in Nextcloud and add "Local" storage. Under Folder name, name it anything, like "Media", and under Configuration type /Media. It should turn green.

    [Attached screenshot: Screen Shot 2017-05-10 at 10.10.20 PM.png]
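    For reference, the same external-storage mount can also be created from the command line with occ; a hedged sketch assuming the /Media container path described above (run inside the Nextcloud container, as the web user on some images):

    ```
    ./occ app:enable files_external
    # mount point /Media, "local" backend, no extra auth, pointing at the container path
    ./occ files_external:create /Media local null::null -c datadir=/Media
    ./occ files_external:list    # the new mount should appear here
    ```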

  8. Hi, I followed Spaceinvader One's video "How to use rclone in unRAID Copy sync and encrypt files to the cloud. Even stream media", and it works fantastically well. However, I'm having an issue mounting it to be accessible on the local network via SMB. I used this share config:

     

    [secure-cloud]
    path = /mnt/disks/secure
    comment =
    browseable = yes
    # Public
    public = yes
    writeable = yes
    vfs objects =

     

    I don't see "secure-cloud" in my network shares with this. I would also like to make this private and accessible to a single user only. Any ideas what's wrong?? I restarted the server to refresh Samba, but no go.

     

    Thank you!
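    For the "private and only user accessible" part, a hedged sketch of the stanza (youruser is a placeholder, and the account must also exist as a Samba user on the server):

    ```
    [secure-cloud]
    path = /mnt/disks/secure
    browseable = yes
    public = no              # no guest access; credentials required
    valid users = youruser   # only this account may connect
    writeable = yes
    ```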

  9. 5 hours ago, jonathanm said:

    I believe you will need to enable the NAT loopback or reflection settings in your router, if that's not possible, different router or router software. Alternatively you could set the internal IP to your domain in the hosts file if it's a single machine that's always inside the network.

     

    I have "Merlin" NAT loopback enabled in the router under the Firewall settings. I went to appdata/nextcloud/www/nextcloud/config/config.php and changed 'overwrite.cli.url' from the internal IP 192.168.1.xx to the external https://IP:port. Is that what you meant? It didn't seem to solve the problem.
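    For context, the two config.php settings involved (the hostname is a placeholder): 'overwrite.cli.url' only affects URLs generated by command-line jobs, while 'trusted_domains' controls which hostnames Nextcloud will answer on, so both usually need the external name. A hedged fragment, not a full file:

    ```
    'trusted_domains' => [
        '192.168.1.xx',               // internal address
        'nextcloud.mydomain.com',     // external hostname (placeholder)
    ],
    'overwrite.cli.url' => 'https://nextcloud.mydomain.com',
    ```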

  10. Hi everyone, I'm not sure if someone here can help me with this situation. I have my unRAID server with Nextcloud running behind a VPN server with the ability to port forward, and OpenVPN running on an ASUS Merlin router. I can easily access my Nextcloud server externally, outside the home, via a DDNS through my VPN provider. I am trying to figure out a way to have a single web address to access it both remotely and when at home on the same network behind the VPN. Any idea how I can achieve this??

     

    thank you!

  11. Hi, I'm trying to get a Radeon RX 460 4GB GPU passed through on an ASRock C2750 Avoton motherboard with Unraid 6.2.4, under Windows 10 with OVMF. Under graphics card I only get the VNC option.

    I only have the onboard GPU and the PCIe GPU above. The server is headless, so I don't need console access. I don't think this motherboard supports IOMMU, so is there no way to pass the GPU through to the Win10 VM? Will upgrading to 6.3 allow it to work? Thanks.

     

    00:00.0 Host bridge [0600]: Intel Corporation Atom processor C2000 SoC Transaction Router [8086:1f01] (rev 02)
    00:01.0 PCI bridge [0604]: Intel Corporation Atom processor C2000 PCIe Root Port 1 [8086:1f10] (rev 02)
    00:03.0 PCI bridge [0604]: Intel Corporation Atom processor C2000 PCIe Root Port 3 [8086:1f12] (rev 02)
    00:04.0 PCI bridge [0604]: Intel Corporation Atom processor C2000 PCIe Root Port 4 [8086:1f13] (rev 02)
    00:0e.0 Host bridge [0600]: Intel Corporation Atom processor C2000 RAS [8086:1f14] (rev 02)
    00:0f.0 IOMMU [0806]: Intel Corporation Atom processor C2000 RCEC [8086:1f16] (rev 02)
    00:13.0 System peripheral [0880]: Intel Corporation Atom processor C2000 SMBus 2.0 [8086:1f15] (rev 02)
    00:16.0 USB controller [0c03]: Intel Corporation Atom processor C2000 USB Enhanced Host Controller [8086:1f2c] (rev 02)
    00:17.0 SATA controller [0106]: Intel Corporation Atom processor C2000 AHCI SATA2 Controller [8086:1f22] (rev 02)
    00:18.0 SATA controller [0106]: Intel Corporation Atom processor C2000 AHCI SATA3 Controller [8086:1f32] (rev 02)
    00:1f.0 ISA bridge [0601]: Intel Corporation Atom processor C2000 PCU [8086:1f38] (rev 02)
    00:1f.3 SMBus [0c05]: Intel Corporation Atom processor C2000 PCU SMBus [8086:1f3c] (rev 02)
    01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Baffin [Radeon RX 460] [1002:67ef] (rev cf)
    01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Device [1002:aae0]
    02:00.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8608] (rev ba)
    03:01.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8608] (rev ba)
    03:05.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8608] (rev ba)
    03:07.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8608] (rev ba)
    03:09.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch [10b5:8608] (rev ba)
    04:00.0 SATA controller [0106]: Marvell Technology Group Ltd. 88SE9172 SATA 6Gb/s Controller [1b4b:9172] (rev 11)
    06:00.0 Ethernet controller [0200]: Intel Corporation I210 Gigabit Network Connection [8086:1533] (rev 03)
    07:00.0 Ethernet controller [0200]: Intel Corporation I210 Gigabit Network Connection [8086:1533] (rev 03)
    08:00.0 SATA controller [0106]: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller [1b4b:9230] (rev 11)

  12. Anyone having issues with torrents starting? After clicking a magnet link and adding it, the torrent shows a hash code as its name and gets stuck on verification without being able to move forward. Here is the log file:

     

    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 20-config: executing...
    [cont-init.d] 20-config: exited 0.
    [cont-init.d] done.
    [services.d] starting services
    [services.d] done.
    [2016-11-18 19:40:12.158] Transmission 2.92 (14714) started (session.c:738)
    [2016-11-18 19:40:12.158] RPC Server Adding address to whitelist: 127.0.0.1 (rpc-server.c:903)
    [2016-11-18 19:40:12.158] RPC Server Serving RPC and Web requests on port 127.0.0.1:9091/transmission/ (rpc-server.c:1110)
    [2016-11-18 19:40:12.158] Port Forwarding Stopped (port-forwarding.c:180)
    [2016-11-18 19:40:12.158] DHT Generating new id (tr-dht.c:311)
    [2016-11-18 19:40:12.159] Using settings from "/config" (daemon.c:528)
    [2016-11-18 19:40:12.159] Saved "/config/settings.json" (variant.c:1266)
    [2016-11-18 19:40:12.159] Watching "/watch" for new .torrent files (daemon.c:573)
    [2016-11-18 19:40:12.159] Blocklist "blocklist.bin" contains 76649 entries (blocklist.c:100)
    [2016-11-18 19:40:12.159] Loaded 1 torrents (session.c:2032)
    [2016-11-18 19:40:45.151] DHT Attempting bootstrap from dht.transmissionbt.com (tr-dht.c:249)
    [2016-11-18 19:46:11.150] Changed open file limit from 40960 to 1024 (fdlimit.c:380)
    [2016-11-18 19:46:11.150] Saved "/config/resume/***i have deleted the filename***.f180c6f3b47eca70.resume" (variant.c:1266)
    [2016-11-18 19:51:31.150] Saved "/config/torrents/570c7151340a67d4a89624ffd589ae49c1d0efcf.570c7151340a67d4.torrent" (variant.c:1266)
    [2016-11-18 19:51:31.150] 570c7151340a67d4a89624ffd589ae49c1d0efcf Queued for verification (verify.c:269)
    [2016-11-18 19:51:31.150] 570c7151340a67d4a89624ffd589ae49c1d0efcf Verifying torrent (verify.c:224)
    [2016-11-18 19:52:11.150] Saved "/config/resume/570c7151340a67d4a89624ffd589ae49c1d0efcf.570c7151340a67d4.resume" (variant.c:1266)

  13. I can't reproduce this.  Pulled transmission, stopped container, edited json file, restarted container, blocklist url remained at what I had configured it to.

     

    It seems that if I edit the JSON file manually, it gets saved with no problems. If I edit the settings through the web GUI and restart the docker, they don't seem to save.

  14. Anyone having a problem making changes to Transmission that stick, for example adding a blocklist URL or changing values? Even when I edit the settings.json file with the container off and then turn the container on, it still overwrites the settings, which I believe is standard practice for the Transmission daemon... But should I be turning off the Transmission service with the container on, editing the settings.json file in the /defaults folder (in the container), and then re-enabling the service? I thought that with a copy of settings.json kept in the appdata/transmission folder (outside the container) I should have been able to modify it there?

     

    Having this issue as well, along with difficulty connecting to peers. Not sure if it is related to the recent update.
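    Transmission rewrites settings.json from memory on shutdown, so the usual sequence is: stop the container (or the transmission service), edit the file, then start it again. The blocklist keys in appdata/transmission/settings.json look like this (the URL is a placeholder):

    ```
    {
        "blocklist-enabled": true,
        "blocklist-url": "http://example.com/blocklist.gz"
    }
    ```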

  15. Getting an import error when trying to import from the "tv" directory below:

     

    Import failed, path does not exist or is not accessible by Sonarr:

     

    I have the correct mapping, and the directory permissions are 777 with owner nobody and group users.

     

    This directory also happens to deny me access via the SMB share when logged in as a user with read/write access.

     

    total 16K
    drwsrwsrwx 1 nobody users   16 Oct 24 20:39 ./
    drwxrwxrwx 1 nobody users   60 Oct 20 21:49 ../
    -rwxrwxrwx 1 nobody users  11K Oct 24 21:15 .DS_Store*
    drwxrwxrwx 1 nobody users   10 Jan 10  2016 ebook/
    drwxrwxrwx 1 nobody users  103 Oct  8 08:35 movie/
    drwxrwxrwx 1 nobody users   31 Oct  8 08:35 music/
    drwxrwxrwx 1 nobody users 4.0K Oct 24 21:08 tv/

     

    What am I missing?
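    One thing that stands out in the listing: the parent directory (./) is drwsrwsrwx, i.e. mode 6777 with the setuid and setgid bits set, which some tools handle oddly. Whether or not that is the cause here, clearing the s bits is easy to try; a sketch demonstrated on a scratch directory (substitute your real share path):

    ```shell
    # Reproduce the drwsrwsrwx (6777) state on a scratch dir, then clear the s bits.
    share=$(mktemp -d)
    chmod 6777 "$share"        # setuid + setgid + rwxrwxrwx, as in the ls output
    chmod u-s,g-s "$share"     # drop both s bits; plain 777 remains
    stat -c '%a' "$share"      # prints 777
    rmdir "$share"
    ```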

  16. Hello, when I upgraded to 6.3-rc2 I think I corrupted the file below.

     

    /boot/config/plugins/dynamix/dynamix.cfg

     

    The current file reads as follows. Does this look OK?

     

    [confirm]
    [display]
    warning="70"
    critical="90"
    hot="45"
    max="55"

     

    thank you!

  17. Hello, my OS X El Capitan VM is working great. I used this code to pass through my USB drive:

     

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/sda'/>
      <target dev='hdd' bus='sata'/>
    </disk>

     

    It works, but when I restart the server the drives get reassigned, so the USB HDD may not be sda anymore. I don't want to pass through the entire PCI-E USB 3 card, as it has 4 USB ports and I use the devices connected to the other ports for other VMs. Is there any way I can pass it through using the product/vendor ID?

     

    I tried the code below, from the post above, which also did not work for me:

     

    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x05dc'/>
        <product id='0xa817'/>
      </source>
    </hostdev>
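    One hedged workaround for the drive-renaming problem, short of passing through by USB vendor/product ID: point the <source> at the stable /dev/disk/by-id path rather than /dev/sda, since by-id names survive reboots (the device name below is hypothetical; run ls -l /dev/disk/by-id/ to find the real one):

    ```
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <!-- hypothetical by-id name; list /dev/disk/by-id/ for yours -->
      <source dev='/dev/disk/by-id/usb-Vendor_Model_123456789-0:0'/>
      <target dev='hdd' bus='sata'/>
    </disk>
    ```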

  18. I just upgraded from rc-1 to rc-2 and restarted the system. I think I may have corrupted something. Using Safari and Chrome, I get the following message at the top, with the screen looking as follows:

     

    Warning: syntax error, unexpected '~' in /boot/config/plugins/dynamix/dynamix.cfg on line 1 in /usr/local/emhttp/plugins/dynamix/include/Wrappers.php on line 19 Warning: array_replace_recursive(): Argument #2 is not an array in /usr/local/emhttp/plugins/dynamix/include/Wrappers.php on line 19 Warning: extract() expects parameter 1 to be array, null given in /usr/local/emhttp/plugins/dynamix/template.php on line 30

     

    Any ideas??

     

    *SOLVED - I think the dynamix.cfg file got corrupted somehow. I replaced it with an old copy. I hope I didn't mess anything up.

    [Attached screenshot: unRAID_rc-2.jpg]