hawihoney

Members
  • Content Count: 696
  • Joined
  • Last visited

Community Reputation: 8 Neutral

About hawihoney
  • Rank: Advanced Member

Converted
  • Gender: Undisclosed

  1. Thanks for your answer. What puzzles me is that it happened during the start of the first parity check (non-correctional), which was immediately canceled. The parity check that was then restarted (correctional) does not mention any problems at all. BTW: It's an LSI 9300-8e HBA connected to a Supermicro BPN-SAS2-846EL1 backplane. I guess that if the correctional parity check comes to a successful end, I can simply ignore these 3x128 errors in the error column?
  2. Today I started a non-correctional parity check on a dual-parity array. Within the first 5 GB, Unraid reported three disks with read errors (128 each) but went on with the parity check. I immediately canceled the non-correctional parity check and then started a correctional one. This correctional parity check went through the first 5 GB without any warnings/errors. What's the status? The read errors just happened once? Why? The disks in question don't show any SMART problems. Is this something I need to worry about? Many thanks in advance. Diagnostics attached. towervm01-diagnostics-20190703-0700.zip
  3. All three machines are back on 6.7.0 now. Everything's good. Sorry for my frustrated post.
  4. Yes, two NVMe drives forming a cache pool. Each on its own PCIe x4 card.
  5. We're running plugins, dockers AND VMs. I wrote that in the answer to your post. No need to discuss decisions that were made in the past. It was running perfectly. But I will change some things, definitely.
  6. Yes, I do see that now. Three events occurred at the same time. I never thought about this:
     * SQLite was thrown out in 6.7.1 and put back in 6.7.2.
     * The Unraid NVIDIA 6.7.2 release is delayed.
     * In addition, lots of security patches have been applied since 6.7.
     As I wrote above, I'm still learning. I will change that, definitely.
  7. We have tons of self-written scripts that create, manipulate, and extract databases (MariaDB and SQLite). Many of them run automatically from within Unraid User Scripts. Some PHP, some Perl, some bash, ... There's, for example, a 30 GB SQLite database that simply holds personal names and their relations. MariaDB is running here as well, but for some jobs it's not fast enough. Running that same, identical 30 GB database on MariaDB was a pain - slow as hell. We thought it would be a good idea to put everything from Windows onto the Unraid server and use the infrastructure of plugins, dockers and VMs. I didn't expect to fall into such a hole. 40 years of software development and I'm still learning. For me it seems that there's much manual activity involved when maintaining a plugin such as Unraid NVIDIA. I was under the impression that these tasks are mainly automated. As I said, I'm still learning. Here's one of the dumps that has been failing since 6.7.1. Boom, without notice.
     echo ".dump" | sqlite3 /mnt/cache/system/appdata/SQLite/Similar/similar.db > /mnt/user/Data/sqlite_backup/Similar/dump.sql
  8. As a user of Unraid NVIDIA _and_ tools using SQLite, the delayed 6.7.2 Unraid NVIDIA release is a real problem here. We can't change back to stock Unraid. On the other hand, some important SQLite-based tools no longer work. In fact it has bitten us, because after applying 6.7.1 the SQLite tools and SQLite dumps overwrote backups with empty files. We simply did not expect that somebody would remove a tool like SQLite from Unraid. Now Unraid 6.7.2 is out and SQLite is back - but not for us. We have to wait for the Unraid NVIDIA 6.7.2 release. Going back to 6.7.0 without these additional security patches is not an option either. So now we have a lot of time to change our own SQLite tools to check for SQLite in Unraid before dumping data or whatever (a sketch of such a guard is below, after this list). New data is not coming into the house - so everything's cool, no? Just another 0.02 USD.
  9. I restart and back up all docker containers every night with User Scripts. So there's a User Script for each and every docker. I had to move the start of the User Scripts to a different time in the night, but that was easy enough (crontab -e).
     docker stop <containername>
     # Backup tasks depending on specific docker containers
     # e.g. rsync user content to backup machine
     # e.g. cp settings to backup machine (e.g. Plex watch state and settings)
     # e.g. dump or export database contents (e.g. MariaDB, SQLite, ...)
     # e.g. ...
     docker start <containername>
     Since these scripts started running every night, I have no longer experienced any problems (a fuller sketch is below, after this list). Specific conditions would be cool, but if you start to collect conditions it will become a can of worms IMHO:
     + RAM usage
     + Port not responding
     + Plex activity from remote users (won't restart and back up Plex if there is streaming activity)
     + Running parity check on remote backup server (won't back up if a parity check is running)
     ... You get the idea. The possible conditions are nearly endless.
  10. It depends: If you start from scratch, I suggest going Nextcloud only. If you already have a filled document archive, I suggest going Nextcloud with external storage. If you add/modify/delete files from outside of Nextcloud as well, I would go that way too. When we started with Nextcloud we already had thousands of documents in well-organized shares and folders. To this day some people work with the shares. We didn't want to stop that at first. External storage is perfect for that workflow. So we're still using External Storage in Nextcloud. BTW, the only thing that's missing in Nextcloud is better Notes support. There are 2-3 Notes apps; some have weird formatting, some didn't work.
  11. Exactly my experience. My two Unraid VMs work at full speed when using SMB. It's really stable. But I always need to remember that I should never copy large files from the bare-metal server to a VM. There's a 100% chance that the complete system crashes. For example, I do not copy backup files from bare metal to a VM; I initiate the fetch from the VMs instead. Some months ago I asked here if there's a way to find out whether a parity check is running on a different machine. Currently my backup jobs start even if the source machine is under heavy load. So I need some technique to store a hidden file whenever a system is under heavy load - maybe a parity check, maybe Plex activity, ... And a different machine (in my case a VM) needs a way to look for that file in a User Script (a sketch of this is below, after this list). I'm pretty sure it must be easy, but I didn't find a way to ask the system "Hey, are you running a parity check?" or "Hey, is there activity in Plex?" ...
  12. Is it just me? Since the release of 6.7 I've been trying to download it from 3 servers. At least 1 min per percent. I'm sitting on a fast line here. Everything else is really fast. What's the matter with that ZIP?
  13. Look here: https://blog.linuxserver.io/2017/05/10/installing-nextcloud-on-unraid-with-letsencrypt-reverse-proxy/
  14. Sorry, that's way beyond my knowledge.
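
Regarding the failing SQLite dump in posts 7 and 8: a minimal sketch of the guard mentioned there, assuming the paths from post 7. It skips the dump when the sqlite3 binary is missing from the OS and refuses to overwrite the previous backup with an empty file. The script itself and the .tmp convention are illustrations, not part of Unraid or the original scripts.

    #!/bin/bash
    # Guarded SQLite dump: never overwrite a good backup with an empty file.
    DB=/mnt/cache/system/appdata/SQLite/Similar/similar.db
    OUT=/mnt/user/Data/sqlite_backup/Similar/dump.sql

    # Abort if the sqlite3 binary was removed from the OS (as happened in 6.7.1).
    if ! command -v sqlite3 >/dev/null 2>&1; then
        echo "sqlite3 not found - skipping dump" >&2
        exit 1
    fi

    # Dump to a temporary file first (hypothetical .tmp suffix).
    echo ".dump" | sqlite3 "$DB" > "$OUT.tmp"

    # Only replace the old backup if the new dump is non-empty.
    if [ -s "$OUT.tmp" ]; then
        mv "$OUT.tmp" "$OUT"
    else
        echo "dump is empty - keeping previous backup" >&2
        rm -f "$OUT.tmp"
        exit 1
    fi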
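For the per-container User Scripts in post 9: a fleshed-out sketch of the stop/backup/start pattern, assuming an rsync of the container's appdata to a backup machine as the example task. The container name, appdata path, and backup target are placeholders.

    #!/bin/bash
    # Nightly restart + backup for one docker container (run via User Scripts/cron).
    CONTAINER=plex                                  # placeholder container name
    APPDATA=/mnt/user/appdata/plex                  # placeholder appdata path
    TARGET=backupserver:/mnt/user/Backup/plex       # hypothetical backup machine

    # Stop the container so its files are consistent on disk.
    docker stop "$CONTAINER"

    # Backup tasks for this specific container, e.g. rsync its appdata.
    rsync -a --delete "$APPDATA/" "$TARGET/"

    # Start the container again.
    docker start "$CONTAINER"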
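And for the "is the other machine busy?" question in post 11: a minimal sketch of the hidden-file idea described there. The busy machine touches a flag file on a share both machines can see; the backup User Script on the VM checks for it before starting. The file name and paths are assumptions, not an Unraid API. (As far as I know, a running parity check is also visible in /proc/mdstat on Unraid, which could set the flag automatically, but I treat that as an assumption too.)

    #!/bin/bash
    # Script 1, on the bare-metal server: wrap heavy work in a flag file.
    FLAG=/mnt/user/system/.busy        # hypothetical shared location
    touch "$FLAG"
    # ... heavy work here, e.g. the parity check window or Plex-intensive hours ...
    rm -f "$FLAG"

    #!/bin/bash
    # Script 2, at the top of each backup User Script on the VM:
    FLAG=/mnt/remote/system/.busy      # same file, seen through the SMB mount
    if [ -f "$FLAG" ]; then
        echo "source machine busy - skipping backup" >&2
        exit 0
    fi
    # ... backup jobs start here ...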