sonofdbn

Everything posted by sonofdbn

  1. I've been looking for a suitable photo gallery docker, and came across PiGallery2 (https://hub.docker.com/r/bpatrik/pigallery2). It's on Docker Hub, and I'm able to get it via Community Apps. There is a docker run command provided on the container page on Docker Hub:

         docker run \
           -p 80:80 \
           -e NODE_ENV=production \
           -v <path to your config file folder>/config.json:/pigallery2-release/config.json \
           -v <path to your db file folder>/sqlite.db:/pigallery2-release/sqlite.db \
           -v <path to your images folder>:/pigallery2-release/demo/images \
           -v <path to your temp folder>:/pigallery2-release/demo/TEMP \
           bpatrik/pigallery2:1.7.0-stretch

     I think I can fill out the generic unRAID template from this, but I'm running into a problem getting anything to download. My docker run command gives this result:

         root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='pigallery2' --net='bridge' -e TZ="Asia/Singapore" -e HOST_OS="unRAID" -p '8092:80/tcp' -v '/mnt/user/appdata/pigallery2/config/config.json':'/pigallery2-release/config.json':'rw' -v '/mnt/user/Photos/Photos and Videos':'/pigallery2-release/demo/images':'rw' -v '/mnt/user/appdata/pigallery2/temp':'/pigallery2-release/demo/TEMP':'rw' 'bpatrik/pigallery2'
         Unable to find image 'bpatrik/pigallery2:latest' locally
         /usr/bin/docker: Error response from daemon: manifest for bpatrik/pigallery2:latest not found.

     It's entirely possible that some of the parameters are wrong, but I can work on those. My main concern is being unable to find the latest image. Is there something I can do to get the image to download? (I did try Piwigo, which I've seen recommended, but ran into a MySQL error when doing the indexing, and resolving that seemed beyond me and Google.)
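     A minimal sketch of a likely fix, assuming the repository simply doesn't publish a "latest" tag (which is what the error suggests): pin an explicit tag. In the unRAID template that means putting the tag in the Repository field, e.g. bpatrik/pigallery2:1.7.0-stretch, the tag used in the Docker Hub example above.

         # Sketch only: same parameters as above, but with an explicit tag instead of the implied :latest
         docker run -d --name='pigallery2' --net='bridge' \
           -e TZ="Asia/Singapore" -e HOST_OS="unRAID" \
           -p '8092:80/tcp' \
           -v '/mnt/user/appdata/pigallery2/config/config.json':'/pigallery2-release/config.json':'rw' \
           -v '/mnt/user/Photos/Photos and Videos':'/pigallery2-release/demo/images':'rw' \
           -v '/mnt/user/appdata/pigallery2/temp':'/pigallery2-release/demo/TEMP':'rw' \
           'bpatrik/pigallery2:1.7.0-stretch'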
  2. In the perhaps not unlikely event that the kids want to play games or watch YouTube, be careful to check the sound capabilities of any thin client software. Lots of thin clients, including (I think) VNC, don't handle sound well, or at all.
  3. Yes, that was exactly the problem. Docker humming along nicely again. I did try to see whether PIA has some status page for their gateways, but didn't find anything. Next time I should just test via the desktop app.
  4. I suddenly found that there was no activity on the docker (no uploads or downloads). So I tried restarting the docker, and now I can't get to the WebUI. The browser error message is ERR_CONNECTION_REFUSED. I'm on 6.5.3 and am using PIA. I tried force-updating the docker, but same result. I'm getting these lines in the log, repeating continuously:

         2019-08-14 00:20:33,812 DEBG 'start-script' stdout output:
         Wed Aug 14 00:20:33 2019 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
         2019-08-14 00:20:33,813 DEBG 'start-script' stdout output:
         Wed Aug 14 00:20:33 2019 TCP/UDP: Preserving recently used remote address: [AF_INET]x.x.x.x:x
         Wed Aug 14 00:20:33 2019 UDP link local: (not bound)
         Wed Aug 14 00:20:33 2019 UDP link remote: [AF_INET]x.x.x.x:x
         2019-08-14 00:21:33,688 DEBG 'start-script' stdout output:
         Wed Aug 14 00:21:33 2019 [UNDEF] Inactivity timeout (--ping-restart), restarting
         2019-08-14 00:21:33,688 DEBG 'start-script' stdout output:
         Wed Aug 14 00:21:33 2019 SIGHUP[soft,ping-restart] received, process restarting
         2019-08-14 00:21:33,689 DEBG 'start-script' stdout output:
         Wed Aug 14 00:21:33 2019 WARNING: file 'credentials.conf' is group or others accessible
         Wed Aug 14 00:21:33 2019 OpenVPN 2.4.7 [git:makepkg/2b8aec62d5db2c17+] x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Feb 19 2019
         Wed Aug 14 00:21:33 2019 library versions: OpenSSL 1.1.1c 28 May 2019, LZO 2.10

     If it's relevant, I recently updated to 6.7.2 but rolled back to 6.5.3.
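     A hedged first check, assuming the repeated inactivity timeouts mean the PIA gateway the container points at is unreachable: test the endpoint from the unRAID console before touching the container. The hostname below is a hypothetical example; substitute whatever gateway the container is actually configured to use.

         # Sketch only: is the configured PIA gateway resolvable and reachable at all?
         nslookup nl.privateinternetaccess.com     # hypothetical gateway hostname - use your own
         ping -c 4 nl.privateinternetaccess.com    # if this fails, try a different PIA gateway in the container settings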
  5. That would be the "better the devil you know" approach?
  6. I missed those reports; at least it's not just me. I'll stay on 6.5.3 for now.
  7. I recently upgraded from 6.5.3 to 6.7.2 and noticed that copying files from my cache drive (SSD) to the array has slowed down significantly. Previously I got transfer speeds of around 74 MB/s, now I'm getting 52 MB/s. I copy the files using Teracopy from my Win10 PC. This was very noticeable over a few file transfers. To verify this, I copied a 6GB file under 6.7.2 and then downgraded back to 6.5.3 and copied the same file, and saw the speed difference. Is there anything I need to change under 6.7.2? I've attached the diagnostic files under 6.7.2 and 6.5.3. tower-diagnostics-6.7.2_Slow.zip tower-diagnostics-6.5.3_Fast.zip
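     For anyone wanting to reproduce the comparison without the network in the way, a hedged sketch of a local test run from the unRAID console on each version (file names and the target disk are hypothetical examples):

         # Sketch only: time the same cache-to-array copy locally, bypassing SMB/Teracopy
         dd if=/dev/zero of=/mnt/cache/speedtest.bin bs=1M count=6000 oflag=direct                              # create a ~6 GB test file on the cache SSD
         dd if=/mnt/cache/speedtest.bin of=/mnt/disk1/speedtest.bin bs=1M oflag=direct status=progress          # copy to the array and note the reported MB/s
         rm /mnt/cache/speedtest.bin /mnt/disk1/speedtest.bin                                                   # clean up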
  8. More experienced network people will give you better answers, but I think there's a problem with your DHCP connection. An address of the form 169.254.x.x is a link-local (APIPA) address, which a machine assigns to itself when it can't reach a DHCP server to get a "normal" internal network address (a common internal network range is 192.168.x.x). Look at your router config to see what range of IP addresses it hands out; you should also be able to see which clients are connected to the router (by MAC address at least). Or see what IP address your Win10 tower has.
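     A quick sketch of how to check this from the Win10 machine itself, in a Windows command prompt:

         ipconfig /all        # check "IPv4 Address", "DHCP Enabled" and "DHCP Server"
         ipconfig /release    # if the address is 169.254.x.x, force a fresh DHCP request
         ipconfig /renew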
  9. Long-time unRAID user: amazed to see that I joined this forum (well, the old Lime-technology one) in 2006! With great support from the forum I've moved from a barebones file server to running VMs and dockers.
  10. I hesitate to offer this as a "solution", because I'm sure there are better ways, but what has helped me a lot with Linux permissions problems is WinSCP. It's a bit like a Linux-oriented file explorer that you run on Windows (it's actually much more than that). As a Windows person I struggle sometimes with Linux permissions. The useful thing about WinSCP is that you get to see permissions and owners on files and folders in unRAID in a Windows GUI environment, and even better, you can change those permissions and owners very easily. This is perhaps the dangerous part, so please proceed with care.
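     For reference, a hedged sketch of the command-line equivalent of what WinSCP is doing, run from the unRAID console; the appdata path is a hypothetical example, and unRAID shares generally expect files to be owned by nobody:users.

         ls -l /mnt/user/appdata/someapp                      # see current owner, group and permissions
         chown -R nobody:users /mnt/user/appdata/someapp      # hand ownership back to the usual unRAID user/group
         chmod -R u+rwX,g+rwX /mnt/user/appdata/someapp       # give owner and group read/write (and traverse on directories)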
  11. I'm no expert, but I do have Minimserver running. It might help if I share my settings. From the docker template for my Minimserver docker: the repository I'm using is nielsdb97/docker-minimserver, network type is Host and I have two container ports: 9790 and 9791. For the container path /media I have /mnt/user/Music/. You need to have the docker started and running so that Minimwatch on Windows can connect to (and configure) Minimserver. My recollection is that the T&C acceptance is done from Minimwatch. Right-click on the Minimwatch icon and perhaps under Properties or About there's a T&C acceptance option. (It's no longer there once everything is up and running.) Unfortunately I can't recall exactly what happened at the start. According to the Minimserver documentation, if your Minimwatch icon is grey, it means it's not connected to Minimserver. I think you need to 1. get the docker started; 2. launch Minimwatch on your Windows machine (it should search for, find, and connect to the Minimserver docker automatically) and then 3. accept the T&C via Minimwatch.
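     Roughly, the docker run equivalent of those template settings would look like the sketch below. With host networking the port mappings aren't strictly needed (which is why only the /media volume appears); the ports 9790/9791 are simply what the service listens on.

         # Sketch only: equivalent of the template settings described above
         docker run -d --name='minimserver' --net='host' \
           -v '/mnt/user/Music/':'/media':'rw' \
           'nielsdb97/docker-minimserver'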
  12. IIRC, unRAID didn't have a cache drive feature initially. The reason I liked unRAID was the protection provided by the parity drive, and I think there's a case to be made that this was the main selling point of unRAID at the time. When the cache drive feature came out I decided not to use it for caching writes (and still don't) because I wanted the assurance that once I had written data to the server it was parity-protected. Sure, the risk of cache drive failure before the data is moved is low, but for me the peace of mind outweighed the speed gain.
  13. Yes, I know there are actually quite a few alternatives, but this one seems suitable for my use case.
  14. Has anyone successfully installed a GoogleHomeKodi (henceforth GHK) docker? GHK is a way to control Kodi using Google Home, and one component is having a GHK docker (there are alternatives to this component). I found the docker on DockerHub via community applications, but it doesn't come with any template. I fiddled around trying to add parameters manually, but it's way above my level of inexpertise. I did find a template (by @CHBMB) but don't know how to use that with the docker installation. While I could play around a bit more, I thought I'd ask here first, hoping this will save me a lot of time.
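     A hedged sketch of how a found template can be used, assuming it's a standard dockerMan XML template: copy it to the user-templates folder on the flash drive, and it should then appear in the Template dropdown when adding a container. The filename below is a hypothetical example.

         cp GoogleHomeKodi.xml /boot/config/plugins/dockerMan/templates-user/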
  15. Good to know. Any experience with SSDs? My Googling didn't turn up anything that looked credible, and the specs of my new Sandisk SSD don't mention power consumption at all.
  16. So does this mean that since 2 disks could need 4 to 5 amps during spin up, I shouldn't use any SATA power splitter at all for hard disks? What about 1 hard disk and 1 SSD, or 2 SSDs?
  17. Thanks so much for the answers; time to get a new drive. (On my current setup, metadata takes up 5GB on each disk.)
  18. I'm on 6.5.3 and run a two disk (SSD) btrfs RAID1 cache pool. One disk is 525GB and one is 1TB. My question is what is my pool size (in terms of how much data I can store on it)? My guess is that with the two disks I have, and if I'm using RAID 1, my pool size is actually 525GB. I'm thinking of increasing the size of my cache pool, and I'd like to replace the smaller drive with a 1TB drive. Will that increase my pool size to 1TB? I read that btrfs also stores some metadata: does that take up a lot of space?
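     For what it's worth, btrfs can report this directly, so a quick sketch of checking the pool from the unRAID console:

         btrfs filesystem usage /mnt/cache    # "Free (estimated)" is the practical usable space for the RAID1 pool
         btrfs filesystem show /mnt/cache     # shows how much of each device is allocated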
  19. Thanks - everything fixed now. As suggested, I stopped the container, set the Nextcloud share to use cache disk and ran the mover. After the mover was done, there were no more Nextcloud files on the cache disk. I remembered to set the share to not use the cache disk before restarting the container 😉
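     A quick sketch of how to confirm the cache really is clear afterwards, from the unRAID console:

         du -sh /mnt/cache/Nextcloud 2>/dev/null || echo "nothing left on the cache"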
  20. Is this a Nextcloud configuration issue or a docker setting issue? My Nextcloud share has "Use cache disk" set to No. In the docker settings I have container path /data set to /mnt/user/Nextcloud. I thought perhaps it might have something to do with something raised on the first page of this topic (see below), but I don't know how to fix this. When I set up the data folder I probably used /data instead of a Nextcloud specific folder (which I think is suggested in the linked image above). If this is the problem, is there any way of reconfiguring this to keep files off the cache? Now that I think of it, I'm sure I used @SpaceInvaderOne's video to set up Nextcloud, and on looking at it again, I see that originally the Nextcloud share is set to use the cache disk. So it's likely that I did that initially and then set it to not use the cache disk later. So if the share is now set to not use the cache disk, does this mean that no new files will be written to the cache? Will updates to files already on the cache be kept on the cache?
  21. I'm running 16.0.1 (I think I started with 14.0), and recently found that I have data files on my cache drive at /mnt/cache/Nextcloud. I didn't expect this because my Nextcloud share is set to not use the cache drive, and the files are taking up a lot of my cache drive. I'd like to get the files off my cache drive. Can I use, say MC, to move the files to a data drive which already has a Nextcloud share folder on it? e.g., /mnt/disk1/Nextcloud without messing up my Nextcloud docker? Is there another or better way of fixing this? Also, how do I prevent files from being written to /mnt/cache/Nextcloud in future?
  22. Thanks. I didn't realise that's what happened with magnets; and now I see the torrent files are in the session folder.
  23. I have a lot of .meta files in my download directory. Is it safe to delete them? If it is relevant, I usually use a magnet link to start a download, and I notice that the download starts with a META file before switching to the actual file. I currently have no such META files downloading, but there are still a lot of .meta files in the download directory.
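     If it helps, a hedged sketch for listing (and, once confident, removing) the leftovers from the unRAID console; the download path is a hypothetical example.

         find /mnt/user/Downloads -maxdepth 1 -name '*.meta' -ls        # list the leftover .meta files first
         # find /mnt/user/Downloads -maxdepth 1 -name '*.meta' -delete  # only once you're sure they're safe to remove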
  24. Not thinking of VLAN tagging. I only have a vague idea of what it is. All the devices are close enough to connect to a single switch. I do have a bunch of switches in various places all linked in a bit of a mess, but for now I just want to understand if there are any real benefits to having larger switches rather than smaller ones, where possible.