MrSliff

Everything posted by MrSliff

  1. In case anybody has problems finding the right tools: there's an open-source alternative, "openSeaChest", on GitHub which seems to work too. It has releases for different platforms: https://github.com/Seagate/openSeaChest
  2. Hi, so in the end it seems the problem was running Duplicati with an external backup in parallel with other copy/backup jobs inside the array. It was causing long CPU and RAM spikes which made my system unresponsive. Docker apps and VMs were still reachable and responsive, but Unraid itself was not. I switched to Borg Backup for the external backups and have not had any error or hiccup since; the CPU and RAM spikes went away. I still have to do the memtest, I have not done that yet.
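For anyone making the same switch, a minimal Borg sketch. The repository URL, source shares, and retention policy below are placeholders, not my actual setup; adapt them, and note borgbackup has to be installed on both ends. The functions are only defined here, not executed:

```shell
#!/bin/bash
# Minimal Borg Backup sketch -- repo path, source shares and prune
# policy are hypothetical examples; requires borgbackup installed.
REPO="ssh://user@backup-host/./unraid-borg"   # placeholder remote repo
SOURCES="/mnt/user/appdata /mnt/user/domains" # placeholder source shares

init_repo() {
    # One-time: create an encrypted repository.
    borg init --encryption=repokey "$REPO"
}

run_backup() {
    # Deduplicated, compressed archive named by host and date.
    borg create --stats --compression zstd \
        "$REPO::{hostname}-{now:%Y-%m-%d}" $SOURCES
    # Keep 7 daily, 4 weekly, 6 monthly archives.
    borg prune --keep-daily=7 --keep-weekly=4 --keep-monthly=6 "$REPO"
}

echo "borg helper functions defined"
```

Scheduled nightly (e.g. via the User Scripts plugin), this replaces the Duplicati job without the parallel-access spikes.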
  3. My new cache disk always shows "high" read bandwidth. Is that normal (for a ZFS drive)? My other ZFS drives don't do that. This one hosts my appdata and domains folders.
  4. Yeah, I can understand this. There is some trust you have to put into this from both sides: to not try to decrypt the backups, and to not store anything criminal on the other side. You have no control over the data other than encrypting it. But I really like the idea of solutions like Sia: store your data decentralized all over the world in RAID-like protected storage. The problem is, there is no solution yet to natively mount that storage as a volume or disk, like mounting via rclone or FUSE.
  5. Just had this idea, could not find it anywhere. Some of you may know Sia, a decentralized cloud storage network where everybody can rent and provide storage. I like this idea. Maybe there are other people around who want to back up some stuff but don't want to rely on a single provider. I back up my data externally to a Storage Box at Hetzner, but that is a single point of failure. Sure, I have backups locally too, but I would be more at ease knowing there is a backup somewhere else, for example for my Bitwarden vault and very important documents. I have already spread them a bit (locally, encrypted USB drives, external backup), but what if everything nearby got lost? So maybe there are people who would like to share a few GB of their storage with each other?
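Swapping backup space with strangers only works if the data is encrypted before it leaves the house. A minimal sketch of the idea using openssl's symmetric mode (the filenames and the inline password are examples only; in practice you would use a key file or a tool like GnuPG or age with proper key management):

```shell
#!/bin/bash
# Sketch: encrypt before handing data to an untrusted peer.
# AES-256 with a PBKDF2-derived key; password inline only for demo.
set -e
workdir=$(mktemp -d)
echo "very important document" > "$workdir/doc.txt"

# Encrypt -- only doc.txt.enc would ever be sent to the peer.
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in "$workdir/doc.txt" -out "$workdir/doc.txt.enc" -pass pass:example

# Decryption round-trip to verify the archive is recoverable.
openssl enc -d -aes-256-cbc -pbkdf2 \
    -in "$workdir/doc.txt.enc" -out "$workdir/doc.roundtrip" -pass pass:example

cmp "$workdir/doc.txt" "$workdir/doc.roundtrip" && echo "round-trip OK"
rm -r "$workdir"
```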
  6. OK, I will try something else first, since I have the feeling it may be caused by different tasks accessing the same data at the same time: I have multiple daily ZFS snapshot and replication tasks which run overnight, but at the same time a Duplicati task is also running, doing an offsite backup of my replication target. I will stop the Duplicati task first and see if things improve. If that turns out to be the reason, I will switch to another solution like Borg Backup or something else. If not, I will do that memtest when I have a 24h time slot.
  7. So, this night the system crashed; no response now. But I have a syslog from the whole night. Maybe a memory issue? One thing worth mentioning: I switched my cache disks to ZFS some weeks ago and also put one ZFS disk into the array for replicating and snapshotting the cache disks, since they are not protected. Maybe that is the reason for the recent crashes. I set up Spaceinvader One's scripts for snapshots and replication on every dataset, so there are about six scripts running one after another overnight. (Would love to see some kind of GUI plugin to handle this.) syslog-192.168.20.3.log
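At their core those overnight scripts do a snapshot plus a send/receive to the array disk. A hand-rolled sketch of that step, to show what is actually running six times in a row each night (dataset names are hypothetical, and since this needs ZFS on the machine the function is only defined here, not executed):

```shell
#!/bin/bash
# Sketch of one ZFS snapshot + replication step, roughly what each
# overnight script does. Dataset names are placeholders.
SRC="cache/appdata"          # unprotected source dataset on the pool
DST="disk1/backup/appdata"   # replication target inside the array

snapshot_and_replicate() {
    local snap="${SRC}@auto-$(date +%Y%m%d-%H%M)"
    zfs snapshot "$snap"
    # Full send for simplicity; real scripts do incremental sends
    # (zfs send -i) once source and target share a common snapshot.
    zfs send "$snap" | zfs receive -F "$DST"
}

echo "replication helper defined"
```

Six of these back to back, plus a Duplicati run reading the same target, is a lot of simultaneous I/O, which fits the unresponsiveness pattern described above.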
  8. So, here is the syslog you asked for. What is quite interesting is that the File Integrity plugin started overnight (24th of October at 04:31) and after that multiple running PIDs became unresponsive; maybe that plugin is the reason. This morning I had quite a few problems: I could not reboot Unraid and I could not stop the array manually. After stopping the Docker process and hard-resetting the machine, my Docker image was corrupted; I had to delete it and reinstall all Docker containers. Another weird thing: one of the network interfaces showed high CPU from a ping script, and I also saw quite a few tasks connecting to WAN IP addresses (mostly in a Turkish IP range) on port 445 (SMB). Weird, I don't know what could cause this. syslog-192.168.20.3.log
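Those port-445 connections can be checked live rather than only in the syslog. A small sketch using `ss` from iproute2 (present on Unraid); it lists established SMB connections in both directions so any unexpected WAN peer stands out:

```shell
#!/bin/bash
# List established SMB (port 445) connections; anything outside the
# LAN range in the peer column deserves a closer look.
list_smb_conns() {
    ss -tn state established '( dport = :445 or sport = :445 )'
}

# Guarded call so the sketch also runs where iproute2 is absent.
command -v ss >/dev/null && list_smb_conns || echo "ss not available"
```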
  9. Apparently the server does not crash, it is just unresponsive. I did not reset the server overnight, and it became available again without a hang/crash or similar. So there may be something very demanding which makes everything unresponsive. Anyway, I enabled the syslog server now.
  10. Hi together, I have encountered problems with the connectivity of my Unraid server over the last weeks. I noticed, mostly in the mornings, that the CPU goes crazy (I tried to watch htop, and the most demanding process was sshfs, but it was hard to get a stable connection). When this happens I also have problems connecting via SSH, the GUI and SMB. Yesterday the server was unresponsive the whole day; however, I could still connect to my VMs and all Docker services, only Unraid-related services were unresponsive. Today everything is fine again. I did not restart the server, so it fixed itself, which leads me to assume it is something like a backup job. What I changed recently:

     - I converted my cache pools to ZFS and added one ZFS disk to the array to get ZFS functionality like snapshots and replication for my unprotected cache; replication runs during the night.
     - Accordingly, I set up Duplicati to back up the replicated data daily (I assume it only backs up the changes, not the whole data set).
     - I changed the Servarr stack including qBittorrent to use hard links, with qBittorrent set to seed for 30 days (I know the disks are spinning 24/7 now because of this; currently seeding around 280-300 torrents).

     I sadly don't know where to start searching, because I could not find the one Docker container or service that causes this. Maybe there is a tool with which I can record htop-like logs to see what is going on, and maybe some Unraid log recording too. Recording for a week or so is not a problem; neither is disk space. Thanks for helping out. unraid-diagnostics-20231021-0955.zip
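For the "record htop-like logs" part, no special tool is needed; `top` in batch mode appended to a file on the array already gives a usable timeline. A sketch (log path and interval are arbitrary examples; on Unraid you would run it from the User Scripts plugin and point the log at an array share so it survives a hard reset):

```shell
#!/bin/bash
# Sketch: append a timestamped top snapshot to a log file so you can
# see what was running when the box went unresponsive.
LOG="/tmp/resource-$(date +%Y%m%d).log"   # use an array share in real use

capture_once() {
    {
        echo "==== $(date '+%F %T') ===="
        top -b -n 1 | head -n 25   # summary header + busiest processes
    } >> "$LOG"
}

capture_once
echo "wrote snapshot to $LOG"
# In practice, loop it:
#   while true; do capture_once; sleep 60; done
```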
  11. I experienced this problem today when I wanted to start moving files off my cache to the array in order to convert the cache to ZFS. All shares configured as either cache-only or to move data Array -> Cache did not update the free disk size; they only showed the free space on the cache. In addition, the Mover did not take any action when triggered manually. For example, appdata was configured like this: Primary: Cache, Secondary: Array, Mover: Array -> Cache. After changing the Mover direction to Cache -> Array, the free size shown was still just the free size of the cache. Only a manual move via the File Manager got it to report the right size, as in the picture above. I don't know if this is a bug, so please move me to General Support if this is a different problem.
  12. Hi folks, I am trying to get Harvester HCI (a hypervisor based on Rancher and Kubernetes) running as a VM. This OS is intended to run bare-metal but can also be used in a VM. Nested virtualization is supported by my system and enabled in Unraid, and running the VM and creating VMs inside it is not a problem. My problem is different: Harvester needs two NICs, one for its management network and one for the VLAN network for its VMs. I added two interfaces to the VM: br0 as the untagged management NIC and br0.1010 for the VLAN network. I think the problem is the already-tagged br0.1010 NIC, because inside the VM I configured this NIC as VLAN 1010, so the traffic presumably gets tagged inside the VM already and then again by the host. What I think I need is some kind of "trunk" interface: untagged on the host side, but accepting tagged traffic from inside the VM. I hope you understand what I mean. Can anybody point me in the right direction for configuring this NIC in the VM's XML? I attached a picture of the official documentation, and here is the link for the VLAN-related material from Harvester: https://docs.harvesterhci.io/v0.3/networking/harvester-network/
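For reference, as far as I know libvirt's `<vlan>` element (including trunk mode) only works with Open vSwitch bridges or SR-IOV ports, not with plain Linux bridges like Unraid's default br0. On an OVS bridge, a trunked guest interface would look roughly like this sketch (bridge name and VLAN ID are examples):

```xml
<!-- Sketch only: requires an Open vSwitch bridge; plain Linux bridges
     (Unraid's default br0) do not support libvirt's <vlan> element. -->
<interface type='bridge'>
  <source bridge='ovsbr0'/>
  <virtualport type='openvswitch'/>
  <vlan trunk='yes'>
    <tag id='1010'/>
  </vlan>
  <model type='virtio'/>
</interface>
```

With a trunk like this, the guest does its own tagging and the host passes VLAN 1010 frames through unchanged, which matches what Harvester expects for its VLAN network.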
  13. Hey mate, appreciate your work!

     Would you mind making a VPN container with, preferably, Nicotine+ or alternatively Soulseek?

     Here's the link to Nicotine+: https://nicotine-plus.org/

     The port forwarding feature in your containers is so useful that I would like to use it with Soulseek too.

     That would be great.

  14. Hey @binhex and everyone else who might help me. I successfully got the qBittorrent Docker container running, including connecting my *arr services to it. I have two (only one, Privoxy works fine) problems which I could not resolve yet. The first is accessing qBittorrent and the *arrs (since these are now part of the qBittorrent network) through Nginx Proxy Manager via domain: accessing the services (torrent and *arrs) via [IP-Address]:[Port] works just fine, but I have set up my home network to resolve domain names and proxy them through Nginx Proxy Manager, and that does not work; I always get a timeout when trying to access via domain. Is there any setting necessary to get this to work? The second one was Privoxy: never mind, it just works now.
  15. Hey, I had to make a similar decision at the beginning of the year: replace just my i3-8100 with a better CPU, or switch platforms completely. For my budget there were two options: buy an i9-9900K to replace the i3-8100, or switch to AMD and buy a CPU on the AM4 socket for a system that is more future-proof in terms of CPU upgradability. In the end I chose the 9900K, because with Plex, for example, you can use Quick Sync for hardware-accelerated transcoding of video streams, so I did not need to buy an extra graphics card for Plex transcoding. Since I have a uATX build, space for more hardware is restricted: I only have an x16 and an x4 slot on my board, and the x4 is already occupied by an i350 quad-gigabit network card. Just to mention: the i3-8100's socket also takes the 9000-series CPUs; a BIOS update of the mainboard may be necessary. The downside is that this would be the only CPU upgrade you could do on that mainboard before switching to a newer architecture.
  16. Hello Guys, i have a problem with the automatic importer for scanned documents. First i describe my use case: I am using the app genius scan to scan my documents and to upload them via a WebDAV Folder in another nginx container. This works well so far, documents are automatically uploaded with nobody:users as the owner. I set up the papermerge import folder to point to this nginx folder (-> /appdata/nginx/www/papermerge-inbox) to automatically grab the documents. However, papermerge recognizes the documents and imports the very first one in the order, but it does not delete them automatically. This results in papermerge trying to import always the first document and after some time i have 20 times of the first document in my inbox in papermerge. Can anybody help me with a solution here?