vakilando

Moderators
  • Content Count

    21
  • Joined

  • Last visited

Community Reputation

10 Good

About vakilando

  • Rank
    Member

Converted

  • Gender
    Male
  • Location
    Germany


  1. I know and have used ocrmypdf-auto, and now I'm testing Paperless. It works well and has a nice GUI. After your post I checked the PDF files (because I couldn't believe it), and yes, these seem to be the renamed original files without the OCR text merged in. But I do need the OCR output "merged" into the PDF file. It's absolutely necessary and essential! But I couldn't find any information about this. Did you find anything? Ok, I've searched and found this issue on GitHub: https://github.com/the-paperless-project/paperless/issues/681 In short: "...it looks like embedding the OCR'd text back into the PDF is not in scope for this project..."
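If the text layer has to end up inside the PDF itself, ocrmypdf (the tool that ocrmypdf-auto wraps) can do that embedding as a separate step after Paperless has filed the document. A minimal sketch; the file names scan.pdf / scan-ocr.pdf are hypothetical placeholders, and the guard makes it a dry run when ocrmypdf or the input file is missing:

```shell
#!/bin/sh
# Sketch: embed an invisible OCR text layer into a PDF with ocrmypdf.
# scan.pdf / scan-ocr.pdf are placeholder names, not from the thread.
IN=scan.pdf
OUT=scan-ocr.pdf
if command -v ocrmypdf >/dev/null 2>&1 && [ -f "$IN" ]; then
    # --language deu+eng: recognize German and English text
    ocrmypdf --language deu+eng "$IN" "$OUT"
else
    # dry-run fallback so the sketch is safe to paste anywhere
    echo "would run: ocrmypdf --language deu+eng $IN $OUT"
fi
```

The output PDF keeps the original page images and adds the recognized text as a selectable, searchable layer underneath, which is exactly what the Paperless-renamed originals lack.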
  2. I have the same board. The manual indeed says: FB-FF > Reserved for future AMI error codes. My board runs flawlessly and shows FF.
     CPU: AMD Ryzen 3900X
     RAM: 64GB
     GPU 1: old Nvidia GTX 745
     GPU 2: Nvidia GTX 1050 Ti
     Disks I have attached:
     - USB: Unraid boot (Kingston DT 101 G2)
     - SATA 1: Array disk 1 (WD Red 4TB)
     - SATA 2: Array disk 2 (WD Red 4TB)
     - SATA 3: Parity disk 1 (WD Red 4TB)
     - SATA 4: Cache-1 btrfs raid (MX500 1TB)
     - SATA 5: Cache-2 btrfs raid (MX500 1TB)
     - SATA 6: UD for virtual machines (SanDisk SSD PLUS 480GB)
     - SATA 7: old Samsung SSD for alternate boot with Windows (boot from SSD)
     - SATA 8: empty
     - LSI 9211-4i IT mode port 1: UD for backup (WD Red 6TB)
     - LSI 9211-4i IT mode port 2: UD for security cam videos (old WD Green 2TB)
     - LSI 9211-4i IT mode port 3: empty
     - LSI 9211-4i IT mode port 4: empty
     - NVMe 1: empty
     - NVMe 2: empty
     Never paid attention to FF 😬
  3. You mean 197 (Current_Pending_Sector); otherwise I see it much the same way. I wouldn't really trust that disk anymore...
  4. Krusader as a Docker container, I assume. Show us your Krusader Docker configuration (> Docker > Krusader > Edit). Don't forget to expand it by clicking "Show more settings ...". I don't think we need the "Advanced view" yet. (My description uses the English UI because I'm still on 6.8.3, while you're probably on the new beta 25?)
  5. Can you save/store files in "Bilder" there, from the Mac using the Finder? If yes: when you look into the "Bilder" folder in the "Ansicht" (view) column on the right in Unraid, do you also see these new files?
  6. I suspect a permissions problem. Presumably you copied the data onto the array as the user root? Or did you simply create a directory on the array? Where did you copy the data to (/mnt/user/blabla or /mnt/user0/blabla or ...)? Hmm, some information is missing:
     - How did you synchronize (copied, sync tool, ...)?
     - From where to where (ext. HDD on the Mac to Unraid, ext. HDD on Unraid to Unraid, ...)?
     - If from an ext. HDD on Unraid to Unraid: as which user?
     - Where did you synchronize to, i.e. are the share permissions correct?
     - What does "but when I access the disk in Unraid" mean: from where (client, ssh, ...) to where (browser, share (smb/nfs))?
  7. Hello, have a look under Settings > Network Settings. There you should find both cards and also be able to enter the gateway. DNS servers (up to 3 of them) I only have on eth0. Further down, under "Interface Rules", you can assign the network cards (by MAC address) to the eth0 / eth1 ports. If your router (DSL/cable) does DNS (even if only forwarding), you should enter it as the gateway; that alone might be enough. Otherwise, try changing the assignment under "Interface Rules". And obviously: make sure both network ports are connected by cable to the router/switch (in the same VLAN, if you use any).
  8. Thanks! The procedure "back up, stop array, unassign, blkdiscard, assign back, start and format, restore backup" is no problem and nothing new for me (except for blkdiscard), as I already had to do it when my cache disks died because of those ugly unnecessary writes on the btrfs cache pool... As said before, I'm leaning toward changing my cache to XFS on a single disk and waiting for the stable 6.9.x release. Meanwhile I'll think about a new concept for managing my disks. This is my configuration at the moment:
     - Array of two disks with one parity (4+4+4TB WD Red)
     - 1 btrfs cache pool (raid1) for cache, docker appdata, docker and folder redirection for my VMs (2x MX500 1TB)
     - 1 UD for my VMs (1 SanDisk Plus 480GB)
     - 1 UD for backup data (6TB WD Red)
     - 1 UD for nvr/cams (old 2TB WD Green)
     I still have two 1TB SSDs and one 480GB SSD lying around here... I have to think about how I could use them with the new disk pools in 6.9.
  9. No, I'm on 6.8.3 and I did not align the partition to 1MiB (it's MBR: 4K-aligned). What is the benefit of aligning it to 1MiB? I must have missed this "tuning" advice...
  10. OK, after I executed the recommended command mount -o remount -o space_cache=v2 /mnt/cache, this is the result after 7 hours of iotop -ao. The running Docker containers were the same as in my "Test 2" (all my containers, including mariadb and pydio). See the picture: it's better than before (fewer writes for loop2 and shfs), but shouldn't it be even less, or what do you think?
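A remount like the one above can silently fall back if the option is rejected, so it is worth confirming that the cache is really running with space_cache=v2. A small sketch that checks the mount options for /mnt/cache in /proc/mounts; the sample line is a canned fallback (assumed device name /dev/sdg1) so the check is illustrated even when run off the server:

```shell
#!/bin/sh
# Sketch: verify that the space_cache=v2 remount actually took effect
# by inspecting the mount options for /mnt/cache.
line=$(grep ' /mnt/cache ' /proc/mounts 2>/dev/null || true)
# Canned sample line (hypothetical device) used when not on the server:
[ -n "$line" ] || line='/dev/sdg1 /mnt/cache btrfs rw,noatime,space_cache=v2,subvolid=5 0 0'
case "$line" in
  *space_cache=v2*) echo "space_cache=v2 active" ;;
  *)                echo "space_cache=v2 NOT active, remount did not stick" ;;
esac
```

On the real server the grep picks up the live /proc/mounts entry, so the canned line is never used there.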
  11. Oh... sorry... I had not read the whole thread... Now I have! I'll try the fix now and do this: mount -o remount -o space_cache=v2 /mnt/cache
  12. Perhaps I should mention that I had my VMs on the cache pool before, but the performance was terrible. Since moving them to an unassigned disk their performance is really fine! Perhaps the poor performance was due to the massive writes on the cache pool...?
  13. Damn! My server also seems to be affected... I had an unencrypted BTRFS RAID 1 with two SanDisk Plus 480GB. Both died in quick succession (more or less 2 weeks apart) after 2 years of use! So I bought two 1TB Crucial MX500. As I didn't know about the problem, I again made an unencrypted BTRFS RAID 1 (01 July 2020). Since I found it strange that they had died in quick succession, I did some research and found all those threads about massive writes on BTRFS cache disks. I ran some tests and here are the results.

     ### Test 1: running "iotop -ao" for 60 min: 2.54 GB [loop2] (see pic1)
     Docker containers running (the most important ones for me; I stopped pydio and mariadb even though they are also important for me, see the other tests for the reason...):
     - ts-dnsserver
     - letsencrypt
     - BitwardenRS
     - Deconz
     - MQTT
     - MotionEye
     - Homeassistant
     - Duplicacy
     shfs writes:
     - Look at pic1, are the shfs writes ok? I don't know...
     VMs running (all on an unassigned disk):
     - Linux Mint (my primary client)
     - Win10
     - Debian with SOGo mail server
     /usr/sbin/smartctl -A /dev/sdg | awk '$0~/LBAs/{ printf "TBW %.1f\n", $10 * 512 / 1024^4 }' => TBW 10.9
     /usr/sbin/smartctl -A /dev/sdh | awk '$0~/LBAs/{ printf "TBW %.1f\n", $10 * 512 / 1024^4 }' => TBW 10.9

     ### Test 2: running "iotop -ao" for 60 min: 3.29 GB [loop2] (see pic2)
     Docker containers running (almost all of my containers):
     - ts-dnsserver
     - letsencrypt
     - BitwardenRS
     - Deconz
     - MQTT
     - MotionEye
     - Homeassistant
     - Duplicacy
     ----------------
     - mariadb
     - Appdeamon
     - Xeoma
     - NodeRed-OfficialDocker
     - hacc
     - binhex-emby
     - embystat
     - pydio
     - picapport
     - portainer
     shfs writes:
     - Look at pic2, there are massive shfs writes too!
     VMs running (all on an unassigned disk):
     - Linux Mint (my primary client)
     - Win10
     - Debian with SOGo mail server
     /usr/sbin/smartctl -A /dev/sdg | awk '$0~/LBAs/{ printf "TBW %.1f\n", $10 * 512 / 1024^4 }' => TBW 11
     /usr/sbin/smartctl -A /dev/sdh | awk '$0~/LBAs/{ printf "TBW %.1f\n", $10 * 512 / 1024^4 }' => TBW 11

     ### Test 3: running "iotop -ao" for 60 min: 3.04 GB [loop2] (see pic3)
     Docker containers running (almost all my containers except mariadb/pydio!):
     - ts-dnsserver
     - letsencrypt
     - BitwardenRS
     - Deconz
     - MQTT
     - MotionEye
     - Homeassistant
     - Duplicacy
     ----------------
     - Appdeamon
     - Xeoma
     - NodeRed-OfficialDocker
     - hacc
     - binhex-emby
     - embystat
     - picapport
     - portainer
     shfs writes:
     - Look at pic3, the shfs writes are clearly lower without mariadb! (I also stopped pydio as it needs mariadb...)
     VMs running (all on an unassigned disk):
     - Linux Mint (my primary client)
     - Win10
     - Debian with SOGo mail server
     /usr/sbin/smartctl -A /dev/sdg | awk '$0~/LBAs/{ printf "TBW %.1f\n", $10 * 512 / 1024^4 }' => TBW 11
     /usr/sbin/smartctl -A /dev/sdh | awk '$0~/LBAs/{ printf "TBW %.1f\n", $10 * 512 / 1024^4 }' => TBW 11

     ### Test 4: running "iotop -ao" for 60 min: 6.23 M [loop2] (see pic4)
     Docker containers running:
     - none, but the docker service is started
     shfs writes:
     - none
     VMs running (all on an unassigned disk):
     - Linux Mint (my primary client)
     - Win10
     - Debian with SOGo mail server
     /usr/sbin/smartctl -A /dev/sdg | awk '$0~/LBAs/{ printf "TBW %.1f\n", $10 * 512 / 1024^4 }'

     PLEASE resolve this problem in the next stable release!!!!!!! Next weekend I will remove the BTRFS RAID 1 cache and go with one single XFS cache disk. If I can do more analysis and research, please let me know. I'll do my best!
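The TBW figures in the tests above all come from the same smartctl pipeline: awk takes Total_LBAs_Written from field $10 of the SMART attribute line, multiplies by the 512-byte LBA size, and divides by 1024^4 bytes per TiB. The same arithmetic can be demonstrated self-contained on a canned attribute line (the LBA count below is made up to land near the values in Test 1):

```shell
#!/bin/sh
# Canned Total_LBAs_Written SMART line (23,437,500,000 LBAs) in place of
# live smartctl output; the awk stage is identical to the commands above.
echo '241 Total_LBAs_Written 0x0032 100 100 000 Old_age Always - 23437500000' \
  | awk '$0~/LBAs/{ printf "TBW %.1f\n", $10 * 512 / 1024^4 }'
# prints: TBW 10.9   (23437500000 * 512 bytes = 12 TB written, ~10.9 TiB)
```

Note the assumption of 512-byte logical sectors; on a drive with 4K logical sectors the multiplier would have to change accordingly.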
  14. I can confirm that setting "Settings => Global Share Settings => Tunable (support Hard Links)" to NO resolves the problem. The strange thing is that I never had a problem with NFS shares before. The problems started after I upgraded my Unraid server (mobo, cpu, ...) and installed a Linux Mint VM as my primary client. I "migrated" the NFS settings (fstab) from my old Kubuntu client (real hardware, no VM) to the new Linux Mint VM, and the problems started. The old Kubuntu client does not seem to have those problems... Perhaps it's also a client problem? Kubuntu vs Mint, Nemo vs Dolphin? I do not agree that NFS is an outdated, archaic protocol; it works far better than SMB if you have Linux clients!
  15. Yes, I'm also on GitHub (Vakilando)