Uncledome

Everything posted by Uncledome

  1. I guess I worded that poorly; I was talking about a cloud app like Nextcloud or ownCloud. Basically, everything I said above ultimately points to a Nextcloud docker on my unraid system. I would never expose my unraid server itself. But I'm still worried that an attacker might get into my unraid server through the docker side of unraid or something like that. Thanks Cherry
  2. Hey guys, kind of a question that maybe doesn't really fit here, but I don't know where else to ask. I did a little searching but found nothing that could push me to either side (open services to the WAN, or access them through VPN). Basically: is your cloud exposed to the WAN? If so, why, and how "secure" is it to do that? My current setup looks like this: three different subdomains handled by Cloudflare to hide the IPs and proxy them through Cloudflare's services, ending on my ISP router on TCP 443, which is NATed to my FortiGate firewall on TCP 443. That traffic is checked for source (only Cloudflare IPs are allowed) and then NATed on the FortiGate to a VLAN in unraid where the letsencrypt docker and the three services reside. So the firewall side looks okay, I guess, but I still worry about what could happen if someone cracks, let's say, the Nextcloud instance through a security issue in Nextcloud or the proxy server. Because of this anxiety of not knowing whether this is secure enough, I've currently disabled the WAN-facing side of my setup and access it through VPN. But this kinda sucks because it's not accessible at work and I cannot share files. Thanks Cherry
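     For anyone wanting the same source check without a FortiGate: a minimal sketch of how the Cloudflare-only allowlist could be built with plain iptables on a Linux box in front of the proxy. The chain name CF_ONLY is just a placeholder I made up; Cloudflare publishes its current IPv4 ranges at the ips-v4 URL used below.
        # Sketch: only accept TCP 443 from Cloudflare's published IPv4 ranges.
        # Assumes curl and iptables are available; CF_ONLY is a made-up chain name.
        iptables -N CF_ONLY 2>/dev/null || iptables -F CF_ONLY
        for range in $(curl -s https://www.cloudflare.com/ips-v4); do
            iptables -A CF_ONLY -s "$range" -p tcp --dport 443 -j ACCEPT
        done
        iptables -A CF_ONLY -p tcp --dport 443 -j DROP
        iptables -I INPUT -p tcp --dport 443 -j CF_ONLY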
  3. Thanks a lot for all this information. I've now set it to write to the USB device and will keep an eye on the server for a few days before disabling it. Yeah, I have a daily backup of the drive anyway in case it gives up :). Thank you very much. Cheers Cherry
  4. Hey Frank1940, thanks for the fast answer. That is good to know. Do you know if there is a possibility to write that to an unraid share as well? I'd like to keep logging 24/7, but not to the flash drive because of the many unnecessary writes that might degrade its lifespan. Does that mean the file won't be overwritten on reboot, but rather a new one will be created? Thanks & Cheers Cherry
  5. Hey guys, first off: sorry if this was already asked; I searched but found nothing. Now to my problem: yesterday my unraid server just shut itself down. It does not look like it crashed, but I can't tell because all the log files were empty. The only thing in them was the boot-up process and everything after it, so I can't tell why it just shut down. As far as I understand, logs are written to RAM, so that would explain why they were empty. Is there a way to make the unraid logs persistent on my drives? I checked the Syslog Server under Settings, but this seems to be for external resources writing to unraid. Maybe I'm just missing something. It would be great if someone could point me in the right direction so I have the logs the next time this happens. Thanks & Cheers Cherry PS: It might be that there is indeed a hint about the shutdown that I missed in the diag files, so I attached them. cherrytree-diagnostics-20191015-1832.zip
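     For completeness, a rough sketch of what doing this by hand could look like, assuming unraid's logger is rsyslog and that it reads /etc/rsyslog.conf; the share path is just a placeholder of mine, and a manual change like this would be lost on reboot anyway.
        # Duplicate everything syslog writes into a file on the array, so the log
        # survives a crash; the target path is a placeholder, pick your own share.
        echo '*.* /mnt/user/system/logs/syslog-persistent.log' >> /etc/rsyslog.conf
        # ask rsyslogd to re-read its configuration
        kill -HUP "$(pidof rsyslogd)"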
  6. We do have a docker. I am currently using one with my self-hosted license. //EDIT: Ah, I just realized I was using an image from Docker Hub (the official one from Atlassian), and it works great with updates and so on.
  7. I always used Atlassian Confluence for this and was more than happy to see the image available as an unraid docker. But the license is something like 10 bucks a year for up to 10 people, if I remember correctly.
  8. I had a few issues after updating to rc3 (unraid was reporting partition issues and so on), but this was fixed by swapping the USB drive to another port (it was on a USB 3 port, now USB 2). Since then there have been no other issues; docker has been running smoothly so far.
  9. Just updated to rc3, currently rebooting. Will let you know if the problem persists.
  10. Sure, I'll forcefully shut down the server now and upgrade to 6.5.1-rc2. Should the server hang again, I'll let you guys know.
  11. There are multiple PIDs with status D. I tried the ls -l with one of them:
     root@Tower:~# ls -l /proc/26457/fd/
     total 0
     lrwx------ 1 root root 64 Mar 30 11:47 0 -> socket:[811239]
     l-wx------ 1 root root 64 Mar 30 11:47 1 -> pipe:[857897]
     lrwx------ 1 root root 64 Mar 30 11:47 2 -> /dev/null
     lrwx------ 1 root root 64 Mar 30 11:47 3 -> socket:[863915]
     root@Tower:~#
     As for the remote mapping, I actually never had any issues. Before unraid (which I am currently trialling) I used an Ubuntu server for my Docker containers (hosted on a XenServer Free Edition host), where I mapped the remote share in fstab and passed it through to the containers. First time in my life I've had these issues.
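     For reference, this is roughly how I'm spotting the stuck PIDs; nothing unraid-specific, just standard ps filtering.
        # List processes in uninterruptible sleep (state D), which usually means
        # they are blocked on I/O against an unresponsive mount.
        ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'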
  12. Yeah, multiple containers should have a mapping in their config. Off the top of my head those are: Plex, Sabnzbd, Sonarr, Radarr, JDownloader, Deluge and maybe a few others (all my media is stored on the remote share).
  13. Sure, I've attached the syslog file because it's too much to quote. syslog
  14. There you go:
     root@Tower:~# df -t tmpfs
     Filesystem      1K-blocks  Used Available Use% Mounted on
     tmpfs               32768   252     32516   1% /run
     tmpfs            16461524     0  16461524   0% /dev/shm
     cgroup_root          8192     0      8192   0% /sys/fs/cgroup
     tmpfs              131072   828    130244   1% /var/log
     shm                 65536     0     65536   0% /var/lib/docker/containers/29665f5836321c13963824e87a964e6f041a7654c5d5a56d6878fc35bf8e7760/shm
     shm                 65536     0     65536   0% /var/lib/docker/containers/9f4716af81f9e00f5e94c53ff602c56902e88c05fbd6f22e9e2296c8e43ff7dd/shm
     shm                 65536     0     65536   0% /var/lib/docker/containers/034adf9c55653bb881968ca8d44355d9bdcb91702daabe289ca5fd0ed82a64d3/shm
     shm                 65536     4     65532   1% /var/lib/docker/containers/a3ecdae744a46beea66d9caec718aec884a4b0793f67f5e5fe17a83f1318fa08/shm
     shm                 65536     4     65532   1% /var/lib/docker/containers/dcf1c167e8731cfdeb87f2081e66eff6ec1812f9a5ee3d4c15f3caae0f39906d/shm
     shm                 65536     8     65528   1% /var/lib/docker/containers/68e3ad2825e29640a8d090fe0fd3090a3353d3f66cee11ceb042b7fddad8f7ef/shm
     shm                 65536     0     65536   0% /var/lib/docker/containers/1a2df515f75397057758aad73897792dc94411ac3aa98b7ae0bfe343fb70c366/shm
     shm                 65536     0     65536   0% /var/lib/docker/containers/43349a50c0d6c50d4987fc88996c481cedfce67c2532fe1b44be8d237b4bb380/shm
     shm                 65536     4     65532   1% /var/lib/docker/containers/7e44a412169eee513eafd0523eed2eabf216ce7d578640174a3ccb211b9e7cfa/shm
     shm                 65536     0     65536   0% /var/lib/docker/containers/8294edae393ac797db55b6060f621bc1f9efade4645888c2e4c10f6ddf8bdb93/shm
  15. These are the last few lines before it stops:
     0        /var/lib/docker/unraid/ca.backup2.datastore
     0        /var/lib/docker/unraid
     0        /var/lib/docker/containerd/daemon/io.containerd.content.v1.content/ingest
     0        /var/lib/docker/containerd/daemon/io.containerd.content.v1.content
     0        /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs/active
     0        /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs/view
     0        /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs/snapshots
     0        /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs
     0        /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.overlayfs/snapshots
     0        /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.overlayfs
     648      /var/lib/docker/containerd/daemon/io.containerd.metadata.v1.bolt
     0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/734d4b9da9e20a4b96828270d6c3f6d7ffc10e3dcf3c8554247673e76b344dd4
     0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/19647dee6de0be1ca75840af5a88b29aea3aa6799b455ee1995094b51f83a2f0
     0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/60ddf788b24a00021d4034e37ec760f34467a1c9179437836e43704ce1e445cc
     0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/9f4716af81f9e00f5e94c53ff602c56902e88c05fbd6f22e9e2296c8e43ff7dd
     0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/a3ecdae744a46beea66d9caec718aec884a4b0793f67f5e5fe17a83f1318fa08
     0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/dcf1c167e8731cfdeb87f2081e66eff6ec1812f9a5ee3d4c15f3caae0f39906d
     0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/68e3ad2825e29640a8d090fe0fd3090a3353d3f66cee11ceb042b7fddad8f7ef
     0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/43349a50c0d6c50d4987fc88996c481cedfce67c2532fe1b44be8d237b4bb380
     0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/7e44a412169eee513eafd0523eed2eabf216ce7d578640174a3ccb211b9e7cfa
     0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8294edae393ac797db55b6060f621bc1f9efade4645888c2e4c10f6ddf8bdb93
     0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby
     0        /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux
     648      /var/lib/docker/containerd/daemon
     648      /var/lib/docker/containerd
     0        /var/lib/docker/tmp
     0        /var/lib/docker/runtimes
     39355848 /var/lib/docker
     root@Tower:~#
  16. There you go:
     root@Tower:~# ls -l /tower-diagnostics-20180330-0848/system
     total 192
     -rw-rw-rw- 1 root root     0 Mar 30 08:48 df.txt
     -rw-rw-rw- 1 root root  1341 Mar 30 08:48 lscpu.txt
     -rw-rw-rw- 1 root root  2701 Mar 30 08:48 lsmod.txt
     -rw-rw-rw- 1 root root  4950 Mar 30 08:48 lsof.txt
     -rw-rw-rw- 1 root root 17032 Mar 30 08:48 lspci.txt
     -rw-rw-rw- 1 root root  1854 Mar 30 08:48 lsscsi.txt
     -rw-rw-rw- 1 root root   698 Mar 30 08:48 lsusb.txt
     -rw-rw-rw- 1 root root   252 Mar 30 08:48 memory.txt
     -rw-rw-rw- 1 root root 52961 Mar 30 08:48 ps.txt
     -rw-rw-rw- 1 root root 40168 Mar 30 08:48 top.txt
     -rw-rw-rw- 1 root root 51030 Mar 30 08:48 vars.txt
  17. Yeah, it is, and I've posted the results right before, bonienl.
  18. df -h does nothing; it just shows an empty line without text, which cannot be exited with CTRL+C, so I have to restart my terminal. ifconfig -a -s shows:
     root@Tower:~# ifconfig -a -s
     Iface      MTU      RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
     bond0     1500   23636415      0     38      0 21457397      0      0      0 BMPmRU
     br0       1500    5307429      0    998      0  4803019      0      0      0 BMRU
     docker0   1500    1528385      0      0      0  1729053      0      0      0 BMRU
     erspan0   1450          0      0      0      0        0      0      0      0 BM
     eth0      1500   23579910      0     38      0 21457397      0      0      0 BMsRU
     eth1      1500      56505      0      0      0        0      0      0      0 BMsRU
     gre0      1476          0      0      0      0        0      0      0      0 O
     gretap0   1462          0      0      0      0        0      0      0      0 BM
     ip_vti0   1364          0      0      0      0        0      0      0      0 O
     lo       65536     158225      0      0      0   158225      0      0      0 LRU
     sit0      1480          0      0      0      0        0      0      0      0 O
     tunl0     1480          0      0      0      0        0      0      0      0 O
     veth3121  1500    1190196      0      0      0  1389978      0      0      0 BMRU
     veth4e02  1500          0      0      0      0     4767      0      0      0 BMRU
     veth6f3e  1500       3166      0      0      0     7943      0      0      0 BMRU
     veth78c8  1500      11491      0      0      0    16615      0      0      0 BMRU
     vethb6b7  1500          0      0      0      0     4722      0      0      0 BMRU
     vethcf79  1500       3373      0      0      0     8134      0      0      0 BMRU
     vethdad5  1500       4939      0      0      0    10547      0      0      0 BMRU
     vethec2c  1500          0      0      0      0     4785      0      0      0 BMRU
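     Follow-up on the hanging df: if it really is the remote share that's wedged, skipping network filesystems and putting a timeout around the call at least shows the local side. Just a workaround sketch, not a fix:
        # Exclude CIFS/NFS so a dead network mount can't hang the call,
        # and give up after 10 seconds either way.
        timeout 10 df -h -x cifs -x nfs -x nfs4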
  19. Sure
     root@Tower:/# tree tower-diagnostics-20180330-0848/
     tower-diagnostics-20180330-0848/
     ├── config
     ├── logs
     ├── qemu
     ├── shares
     ├── smart
     ├── system
     │   ├── df.txt
     │   ├── lscpu.txt
     │   ├── lsmod.txt
     │   ├── lsof.txt
     │   ├── lspci.txt
     │   ├── lsscsi.txt
     │   ├── lsusb.txt
     │   ├── memory.txt
     │   ├── ps.txt
     │   ├── top.txt
     │   └── vars.txt
     └── unRAID-6.5.0.txt

     6 directories, 12 files
     root@Tower:/#
  20. Sure, output of cd / and ls:
     root@Tower:~# cd /
     root@Tower:/# ls
     bin/  boot/  dev/  etc/  home/  init@  lib/  lib64/  mnt/  proc/  root/  run/  sbin/  sys/  tmp/  tower-diagnostics-20180330-0840/  tower-diagnostics-20180330-0841/  tower-diagnostics-20180330-0848/  usr/  var/
     and output of ls on the diag folder:
     root@Tower:/# ls tower-diagnostics-20180330-0848/
     config/  logs/  qemu/  shares/  smart/  system/  unRAID-6.5.0.txt
     root@Tower:/#
     I am currently still waiting for your answer before forcing the server down again.
  21. Okay, the server has issues again. Guess I'll be downgrading to 6.4.1, as I had no issues there. Just for your info: diagnostics won't work, neither through the WebUI nor directly from the CLI; it just hangs right after "Starting diagnostic collection". As for the docker command:
     true#8181/tcp:8181|#8181/tcp|#172.17.0.9#/mnt/user/appdata/tautulli:/config:rw|/mnt/user/appdata/PlexMediaServer/Library/Application Support/Plex Media Server/Logs/:/plex_logs:rw|
     true##1900/udp|3005/tcp|32400/tcp|32410/udp|32412/udp|32413/udp|32414/udp|32469/tcp|8324/tcp|##/mnt/user/appdata/PlexMediaServer/transcode:/transcode:rw|/mnt/disks/DS22_Media:/data:ro,slave|/mnt/user/appdata/PlexMediaServer:/config:rw|
     true#8989/tcp:8989|#8989/tcp|#172.17.0.7#/mnt/disks/DS22_Media//Downloads/NZBs/Complete:/downloads:rw,slave|/mnt/user/appdata/sonarr:/config:rw|/dev/rtc:/dev/rtc:ro|/mnt/disks/DS22_Media/:/tv:rw,slave|
     true#7878/tcp:7878|#7878/tcp|#172.17.0.8#/mnt/disks/DS22_Media//Downloads/NZBs/Complete:/downloads:rw,slave|/mnt/disks/DS22_Media/Movies:/movies:rw,slave|/mnt/user/appdata/radarr:/config:rw|
     true#443/tcp:444|80/tcp:8383|#443/tcp|80/tcp|#172.17.0.12#/mnt/user/appdata/heimdall:/config:rw|
     true#8090/tcp:8090|#8090/tcp|8091/tcp|#172.17.0.13#/mnt/user/appdata/confluence:/var/atlassian/application-data/confluence:rw,slave|
     true#80/tcp:8282|#443/tcp|80/tcp|#172.17.0.3#/mnt/user/appdata/organizr:/config:rw|
     true#5076/tcp:5076|#5075/tcp|5076/tcp|#172.17.0.5#/mnt/user/appdata/hydra2:/config:rw|/mnt/disks/DS22_Media/Downloads/NZBs/Complete:/downloads:ro,slave|
     false##58846/tcp|58946/tcp|58946/udp|8112/tcp|##/mnt/user/Torrent Downloads/:/downloads:rw|/mnt/user/appdata/deluge:/config:rw|
     true##10011/tcp|30033/tcp|41144/tcp|9987/udp|##/mnt/user/appdata/teamspeak3:/config:rw|
     true#8080/tcp:8080|9090/tcp:9090|#8080/tcp|9090/tcp|#172.17.0.6#/mnt/disks/DS22_Media/Downloads/NZBs/Complete:/downloads:rw,slave|/mnt/disks/DS22_Media/Downloads/NZBs/Incomplete:/incomplete-downloads:rw,slave|/mnt/user/appdata/sabnzbd:/config:rw|
     false#8181/tcp:8182|#8181/tcp|#172.17.0.2#/mnt/disks/DS22_Media/Downloads/NZBs/Complete:/downloads:rw,slave|/mnt/disks/DS22_Media/Music:/music:rw,slave|/mnt/user/appdata/headphones:/config:rw|
     true#5800/tcp:7803|5900/tcp:7903|#5800/tcp|5900/tcp|#172.17.0.10#/mnt/user/Handbrake/:/storage:ro|/mnt/user/Handbrake/watch:/watch:rw|/mnt/user/Handbrake/output:/output:rw|/mnt/user/appdata/HandBrake:/config:rw|
     true#8080/tcp:8082|#3389/tcp|8080/tcp|#172.17.0.11#/mnt/user/JDownloader Downloads/:/downloads:rw|/mnt/user/appdata/JDownloader2:/config:rw|
     true#5432/tcp:5432|#5432/tcp|#172.17.0.4#/mnt/cache/appdata/postgresql:/var/lib/postgresql:rw|
     true#8080/tcp:8081|#8080/tcp|#172.17.0.2#/:/rootfs:ro|/var/run:/var/run:rw|/sys:/sys:ro|/var/lib/docker/:/var/lib/docker/:ro|
     Deluge and Headphones were offline from the start, as they do not automatically start with the server (I disabled them).
  22. Okay, I started the diagnostics a few minutes ago, when you posted how to do it. It looks like it is currently still running, so I am guessing this might take a while (at the moment: "Starting diagnostics collection..."). I'll edit this post once it is finished and I have the zip folder. //EDIT1: Okay, I have waited a few minutes and still nothing is happening. Is there a possibility to force the server to restart without losing the validity of my parity? Otherwise I'll have to force it down using the power button. //EDIT2: Okay, I went with the forced reboot. I had to reboot it again afterwards because the passphrase box was missing... The diagnostic should be attached (made after the reboot). tower-diagnostics-20180330-0552.zip
  23. How can I send you the diagnostic without having access to the WebUI? I can still access the server with SSH & WinSCP.
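     In case anyone else ends up here without a WebUI: pulling the zip off over SSH from another machine worked for me. The path below assumes the zip ended up under /boot/logs; adjust it to wherever the diagnostics command said it wrote the file.
        # Run from the desktop, not the server; copies the diagnostics zip over SSH.
        scp root@tower:/boot/logs/tower-diagnostics-20180330-0552.zip .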
  24. Tried both; the first command told me nginx was not running, so I used the start parameter. Afterwards it told me that the daemon was starting, and the status parameter confirmed this. php-fpm could not be restarted; even with a force-quit, it just failed.
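     For the record, these are the commands as I remember running them; the rc.d paths may differ between unraid versions, so treat this as a sketch rather than gospel.
        # Check whether the WebUI's nginx is up, start it, then try php-fpm.
        /etc/rc.d/rc.nginx status
        /etc/rc.d/rc.nginx start
        /etc/rc.d/rc.php-fpm restart   # this is the step that kept failing for me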