Everything posted by BigChris

  1. Hello, I am trying, without success, to set up a WireGuard tunnel. I proceeded as described here: But it does not work. When I connect with my Android client, I can see in the Unraid settings that the "Data received" and "Data sent" values change, but "Last Handshake" says "not received". When I try to open a web page, it does not load, and I cannot ping any IP on the network, not even the Unraid server itself. The port forwarding should be correct. Where could the problem be?
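A "Last Handshake: not received" status can mean two different things: the handshake packets never arrive at the server (wrong endpoint or port forward), or they arrive but cannot be decrypted (mismatched keys). For reference, a minimal server-side WireGuard config looks like the sketch below; every address, port, and key here is a placeholder, not taken from the post:

```ini
# /etc/wireguard/wg0.conf on the server (sketch; all values are placeholders)
[Interface]
Address    = 10.253.0.1/24
ListenPort = 51820                  ; must match the UDP port forwarded on the router
PrivateKey = <server-private-key>

[Peer]
; the Android client
PublicKey  = <client-public-key>
AllowedIPs = 10.253.0.2/32          ; the client's tunnel address
```

If the server's traffic counters move but the handshake still never completes, a copy-paste error in one of the public keys is a common culprit.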
  2. This is a share on my array. Nothing is mounted there; I meant that the container path /var/cache/zoneminder points there. Is that OK?
  3. Thanks again for your quick help. I deleted the Zoneminder container and also removed the template, then reinstalled Zoneminder from the Apps. The data path is now only present once, but the memory usage is immediately high again. Should I have deleted the appdata folder as well? It is actually on the share, though.
  4. I noticed that some shares give me the message "Some or all files are unprotected". When I start the mover, this does not change anything. Some data remains on the cache. What's strange is that it's data that I haven't needed for a while, for example from an older film. What is the reason for this? It almost looks like the data exists twice, on the array and the cache. I have attached an example as screenshots. What can I do?
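One way to check whether files really exist on both the cache and the array is to compare Unraid's per-device views of the shares, which it exposes under /mnt/cache and /mnt/diskN. A minimal sketch; the share name "Movies" and the helper name are mine:

```shell
#!/bin/sh
# List files that exist both on the cache and on a given data disk.
# /mnt/cache/<share> and /mnt/diskN/<share> are Unraid's per-device
# views of the same user share.
find_dupes() {
  cache_root="$1"; disk_root="$2"
  ( cd "$cache_root" 2>/dev/null && find . -type f ) | while read -r f; do
    if [ -e "$disk_root/$f" ]; then
      printf '%s\n' "$f"
    fi
  done
}

find_dupes /mnt/cache/Movies /mnt/disk1/Movies   # "Movies" is a hypothetical share
```

Any path this prints is present on both devices, which matches the "exists twice" suspicion; the mover skips files that already exist at the destination.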
  5. I mounted the data path there (Container Path: /var/cache/zoneminder). I just noticed that this path is present twice for me. See screenshot.
  6. I have the problem that my Zoneminder image keeps growing; the current Unraid shows 3.8GB. I have relocated the following:
     /mnt/user/appdata/Zoneminder -> Container Path: /config
     Data Path: /mnt/user/zoneminder-data/ -> Container Path: /var/cache/zoneminder
     I have noticed that the file frames.ibd, for example, is very large: 2.3GB. It is located in appdata/Zoneminder/mysql/zm. Can I relocate that path/file as well? Or what else can I do to stop my Zoneminder container from growing?
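To see which part of appdata is actually growing, it can help to list the largest subdirectories. Note that frames.ibd is a MySQL table file, so it lives inside the database under /config and cannot simply be mapped elsewhere on its own; ZoneMinder's purge filters are the usual way to cap event growth. A small sketch (the path is the appdata path from the post; the helper name is mine):

```shell
#!/bin/sh
# Show the five largest entries up to two directory levels below a path,
# sorted smallest to largest (sizes in KiB).
largest() {
  du -k --max-depth=2 "$1" 2>/dev/null | sort -n | tail -n 5
}

largest /mnt/user/appdata/Zoneminder
```

Running this before and after a day of recording makes it obvious whether the MySQL directory or something else is the part that grows.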
  7. I have not yet understood the principle. Can I just take the data from disk2 (/mnt/disk2) and copy it back to disk2 after formatting? Won't that mess up the array (/mnt/user)?
  8. I decided to use Squid's method. The parity is being rebuilt, and I think it worked. Thank you both very much for the quick and great help.
  9. @itimpi I could also just upload the data again via FTP. I just don't know exactly what the right way is. Do I simply copy the data to hard disk 2, i.e. to /mnt/disk2? Or do I copy the data to the array, i.e. to /mnt/user and there into the respective folders? Or is Squid's method the best way? Right now I'm worried about both.
  10. I'm afraid that was my mistake. I thought I could change the file system while replacing the hard disk. Before installing the new disk, I changed Disk2 (the slot the disk was in) from btrfs to XFS. I guess that was the mistake. What can I do now? Can I simply copy the data manually from the old disk to the new one? Or how should I proceed?
  11. I don't know what happened. I wanted to replace a data disk. First I replaced the parity disk with a larger one: I plugged in the larger disk as a second parity disk and, after the parity sync, removed the old parity disk. Then I replaced the data disk that I wanted to swap with the old parity disk. I did the following:
      - Stopped the array
      - Set the data disk to "none"
      - Shut down the server
      - Replaced the data disk
      - Assigned the new data disk to the disk slot
      - Started the array
      The disk was formatted by Unraid and the rebuild process started. After the rebuild, however, the new data disk was completely empty and the data of the old drive had disappeared! What happened? What did I do wrong when changing the data disk? Should I not have changed the file system? (I changed from btrfs to XFS.) How do I proceed now? I still have the old data disk with the status from before the removal. Can I simply get the data back?
  12. After today's update I get the following error message when opening Zoneminder: "Unable to connect to ZM db. SQLSTATE[HY000] [2002] No such file or directory". The container is automatically terminated. What can I do? Update: I fixed it with this. Because of the change in how shared memory is set up, you will need to use the new Zoneminder template. Do this procedure:
      - Remove the Docker image. All your data and settings will be kept.
      - Click on "Add Container" and delete the Zoneminder template.
      - Go to the Apps and get the new template.
      Read about adjusting shared memory, and be sure to increase it until you get to less than 50% on the Zoneminder console.
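The "less than 50%" figure refers to how full the container's shared memory (/dev/shm) is; in Docker terms that size is set with the --shm-size option, which the template exposes as a parameter. A quick way to check it from a shell inside the container (a generic Linux command, not ZoneMinder-specific; the helper name is mine):

```shell
#!/bin/sh
# Print how full /dev/shm is (the "Use%" column of df).
shm_usage() {
  df -h /dev/shm | awk 'NR == 2 { print $5 }'
}

shm_usage
```

If the printed percentage creeps toward 100%, captures start failing; increasing --shm-size until this stays under roughly 50% matches the advice in the post.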
  13. Is it possible to deactivate SSL? With everything I have tried so far, the web frontend is no longer accessible afterwards.
  14. Thank you! Works great. I will check playback today.
  15. The title was difficult to phrase. I have the following problem. System: Dell T20 running Unraid, with Emby as a Docker container (from emby). Client: Nvidia Shield. If I watch a movie from the array, at some point I get the message in Kodi: "Source too slow for smooth playback". The film stops briefly and then runs normally until the end. If I watch via Emby directly (in the app), the film starts to stutter while the audio continues normally; then I have to stop and restart playback. Does anyone else have this problem? I read in the Unraid forum that if there are problems with transfer speeds from the array, the option Settings -> Global Share Settings -> Tunable (enable Direct IO) should be set to Yes. I tried that, and it actually accelerates reading to 110MB/s; before I had only about 50MB/s, which should already be enough for smooth playback. However, the Emby Docker then exits with the error message: SQLitePCL.pretty.SQLiteException: IOError: disk I/O error. Does anyone here have experience with this? The problem does not occur when reading from the cache.
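A simple way to separate a network or client problem from a disk/array problem is to measure sequential read speed locally on the server with dd. The file paths below are hypothetical examples and the helper name is mine:

```shell
#!/bin/sh
# Print dd's throughput summary line for a sequential read of a file.
read_speed() {
  dd if="$1" of=/dev/null bs=1M 2>&1 | tail -n 1
}

read_speed /mnt/user/Movies/example.mkv    # hypothetical file on the array
read_speed /mnt/cache/Movies/example.mkv   # same file while still on the cache
```

If the array path reads far slower than the cache path even locally, the bottleneck is the array read path itself rather than the network or the Shield.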
  16. Hello. I have two hard disks in my array that I'm a little worried about. Disk 1 shows the following under Attributes:
      199 UDMA CRC error count  0x0032  200  092  000  Old age  Always  Never  514
      Disk 3 shows this:
      187 Reported uncorrect    0x0032  099  099  000  Old age  Always  Never  1
      Both disks show "SMART extended self-test: Completed without error". Do I have to worry about these two disks, or should I replace them soon?
  17. Hello, I wanted to change my parity disk and thought it would be good to always have a valid parity during the whole process. To do this, I installed a larger hard disk as Parity 2 and, after the sync, removed the first parity disk. Now I have an array with 3 data disks and a disk in the Parity 2 slot. This looks strange, but is it also a problem? Could I assign Parity 2 to the Parity slot (as a cosmetic measure)? Or are the slots not interchangeable?
  18. By quasi-idle I mean that only my Docker containers are running and there is no data access from outside. The following containers run on my system:
      - Zoneminder video surveillance with four H.264 full-HD cameras
      - 2 x MariaDB
      - Ubiquiti UniFi
      - Piwigo photo management
      - Nginx Proxy Manager
      - Emby server
      I'm guessing that Zoneminder creates the load.
  19. My Unraid machine is a Dell T20 server with 32GB RAM and an Intel Xeon E3-1225 v3 CPU. In addition there is a parity disk and 2 data disks, as well as a cache pool with 2 SSDs. The overall load is at times 62% (in quasi-idle operation: only Docker, no access to shares). Besides several shares I have 8 Docker containers, in which, among other things, my Zoneminder video surveillance runs with 4 cameras (I plan to upgrade two to 3 cameras). In addition there are two virtual machines with Ubuntu Server installations. Does an upgrade make sense with this load? Which CPU/mainboard could you recommend for the above requirements? My server runs 24/7, so power consumption is relevant.
  20. I tried this, and edited /etc/nfsmount.conf to this:
      # Protocol Version [2,3,4]
      # This defines the default protocol version which will
      # be used to start the negotiation with the server.
      Defaultvers=3
      #
      # Setting this option makes it mandatory the server supports the
      # given version. The mount will fail if the given version is
      # not supported by the server.
      Nfsvers=4
      But if I try to mount with sudo mount -t nfs -o nfsvers=4 /mnt/nfsv4-test/ I get: mount.nfs: Protocol not supported. What am I doing wrong?
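Two things stand out in the config above: Defaultvers=3 and Nfsvers=4 pull in opposite directions (Nfsvers makes that version mandatory), and "Protocol not supported" typically means the server side does not offer NFSv4 at all; older Unraid releases only exported NFSv3, in which case no client setting will help. A sketch of the config with only a default version set (an assumption, not a fix confirmed in the thread):

```ini
# /etc/nfsmount.conf (sketch): set a default, but do not force a version
# the server may not support; leave Nfsvers unset so negotiation can fall back.
[ NFSMount_Global_Options ]
Defaultvers=3
```

Which versions the server actually offers can be checked from the client with rpcinfo -p <server-ip>; the "nfs" rows list the supported protocol versions in the second column.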
  21. Since the last update I can't reach the web interface anymore. I have tried different configurations (bridge/host...). I make the call from the Docker page, or manually, and I can ping the IP successfully. A status query in the console gives the following:
      # service zoneminder status
      12/18/18 10:39:44.556974 zmpkg[889].INF [main:57] [Command: status]
      12/18/18 10:39:44.562528 zmpkg[889].INF [main:305] [Sanity checking States table...]
      12/18/18 10:39:44.566693 zmpkg[889].INF [main:97] [Command: status]
      ZoneMinder is running
      I tried a clean new install from CA with default settings: same error. What can the problem be?
  22. Back on 6.4.1, no issues. I don't think it is a hardware problem.