casperse

Everything posted by casperse

  1. So I added pcie_aspm=off and did a reboot, started the rebuild, and saw no errors, but after some time I start getting these again? (Where the flag goes is sketched below.)
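     A minimal sketch of where the flag ends up, assuming the stock Unraid syslinux configuration on the flash drive (any other append flags you already have stay on the same line):

         # /boot/syslinux/syslinux.cfg (relevant entry only)
         label Unraid OS
           kernel /bzimage
           append pcie_aspm=off initrd=/bzroot

     The parameter disables ASPM for every PCIe device and only takes effect after the reboot.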
  2. It only appeared at start up and I have not seen it since. I can see that it did a backup of the USB to "My server", so that also worked! That's a good thing, right? Like this: So the error is not related to drive failures but to NVMe drive errors (because of the SMART transfer warnings). Again, thanks for helping me out! I would never have guessed that my Platinum Corsair AX860i power supply would cause problems; I actually think they have a very long warranty, I'll have to check that.
  3. Okay @JorgeB, I got a brand new 1000W Corsair PSU and I just booted the server. Now I get a new error message: I am pretty sure I have a backup on my Unraid account? BUT I can see that drive 18 has started to rebuild! And the logs don't show any errors, so far so good! Getting new errors again, but it's still rebuilding (ETA 4 days!). So what should I do now? Wait for the two drives to rebuild? Do I need a new USB for Unraid?
  4. @JorgeB That could explain it! I don't believe it's the cables or the controller, so a PSU problem actually makes sense! I don't dare turn it on before I have a replacement PSU (I'll report back when I have been out shopping for one).
  5. Cables look fine, temperatures are fine, and the controller, an LSI Logic SAS 9305-24i Host Bus Adapter, has a connection to all drives.
  6. I stopped the array, removed the disabled drive 1, and started the array without VMs & Docker. I then added drive 1 and started the array again, and it started a rebuild of drive 18 and the emulated drive 1. BUT then it stopped, and when I looked in the log file I got this: Now it's writing errors on drives 2 & 6. I have shut down the server and now I don't know how to proceed? New diagnostic files attached here: diagnostics-20230421-1352.zip
  7. So before disk 1 was removed it was named: And now it says (sde). Any input on how I can get my disk back into the array?
  8. Exactly the same thing just happened to me. Disk 1 is disabled? (Could be a coincidence?) ERRORS: Emulating two drives during a rebuild! :-( - General Support - Unraid
  9. Thanks! I really didn't have the budget to buy one right now, but I thought this was the main reason for my other errors and I wouldn't risk it crashing. Also, I didn't know that you couldn't run SMART tests on NVMe devices. Thanks JorgeB, you just made my day! I guess you then run with it until it fails, or do you have two in a RAID 1 setup?
  10. Hi Everyone, I was wondering, when replacing my failed cache pool "cache_appdata", whether I can do it quickly by moving it to my other cache drive? Moving all these small files back to the array and then back again to the new cache drive would take days! If the mover can NOT do this between cache pools, could I then do it manually like this (sketched as a script below)?
     • Stop Dockers and VMs
     • Change the pool from cache_appdata to cache_shares
     • Run: rsync -avX /mnt/cache_appdata/ /mnt/cache_shares/
     • Remove the failed cache pool "cache_appdata"
     • Insert the new replacement cache drive and name it cache_appdata
     • Run: rsync -avX /mnt/cache_shares/ /mnt/cache_appdata/
     • Change "select cache pool" back from cache_shares to cache_appdata
     • Start VMs & Docker
     Or is the only way to wait for all the files to be copied to the array and back again?
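     A minimal sketch of the manual copy as it would run from the host shell, assuming both pools are mounted under /mnt and Docker/VMs are already stopped (the dry-run compare is an extra safety step, not required):

         # 1) Copy everything off the failing pool, preserving permissions and extended attributes
         rsync -avX /mnt/cache_appdata/ /mnt/cache_shares/

         # 2) Compare source and copy before removing the old pool (lists any differences, changes nothing)
         rsync -avXn --checksum /mnt/cache_appdata/ /mnt/cache_shares/

         # 3) After the new drive is installed and the pool is recreated as cache_appdata, copy the data back
         rsync -avX /mnt/cache_shares/ /mnt/cache_appdata/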
  11. Seems the Western Digital is "Built for a NAS" and has 5100 TB of writes under warranty = 5.1 PB, so this might be my best option, looking at the Samsung which failed after 2.1 PB of writes. Any recommendation would be most welcome 🙂 The only drawback is that this is Gen 3 and they are coming out with Gen 5, but speed isn't everything; reliability is more important. Best M.2 NVMe SSD for NAS caching 2023 - NAS Master
  12. Hi All, I think my cache app drive is failing me. My out-of-memory problems led me to a failed M.2 drive; I can't even run a SMART scan? So I need an M.2 drive that will last me a while, and reads & writes are extensive! Any recommendations? (Danish market, 4TB drives) Western Digital RED?
  13. Hi All, GHOST IN THE MACHINE? I really need some help to fix my Unraid server; I'm seeing many new errors and am not sure what's causing them? So far Unraid has been pretty stable. Most of this just started during a rebuild of a new replacement drive nr. 18. I got this error: And the log from Disk 1: But I guess it's best to wait for drive 18 to finish rebuilding before trying to do a reboot and rebuild drive 1? Since it looks like I am emulating 2 drives, Disk 1 and Disk 18! Drive 1 now shows up under Unassigned Devices? On top of this I am getting some other strange errors and behaviors: Out of memory message: Diagnostics attached: diagnostics-20230420-1529.zip. New diagnostics after disk 1 was removed: diagnostics-20230421-1305.zip
  14. Yes, I updated the driver to 5.30..... (red arrow) to the latest version and got 2 GPUs listed. Went from 5.20. to 5.30.
  15. I was afraid to lose support for the P2000, but I did the upgrade and it looks great! Now to test it in Docker. Thanks again for your support!
  16. Thanks! I have unbound the card: I seem to have a very old Nvidia driver. I just looked at the "old" Syslinux Configuration and I can't see any old stubbing here? (A sketch of what stubbing would look like is below.) To be honest, it's a very long time since I messed with this, so I'm not sure what to look for? Got to love Unraid, no other system would let you do these things across VMs and dockers 🙂
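     For anyone checking the same thing, a minimal sketch of what old-style stubbing would look like on the append line of the Syslinux Configuration; the vendor:device IDs are example placeholders only:

         # Old-style stubbing: GPU (and its audio function) bound to vfio-pci at boot
         append vfio-pci.ids=10de:1c30,10de:10f1 initrd=/bzroot

     Newer Unraid versions handle this binding under Tools > System Devices instead, so an empty append line can be normal.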
  17. Hi All, maybe a stupid Q, but I haven't really found any post that isn't very, very old on this topic 🙂 I finally got my NVIDIA GeForce RTX 3060 working in a VM running Windows 11. But I really need the new encoder from the RTX 3060 GPU for the Tdarr docker; do I need to remove it from the VM in order to use it with Docker? (I remember someone said it was enough not to run the VM while using it elsewhere.) Do I also have to remove the X from the IOMMU groups? Is it even possible to have two GPUs in the NVIDIA plugin? (I have an older P2000 card also, and I can only see this one card.) Cheers Casperse
  18. Hi All, I have tried installing Unmanic 10 times and with different versions, and for some reason the library scan seems to be a problem? I managed to get it working for a short time on version 0.1.4 (looks like this is the one all the YouTube videos are made from?). Anyway, some questions: Have any of you mapped a drive from UAD with the read/write slave option? In the log I can see it scans the files, but they are not shown in the UI for processing? New UnmanicLogs.zip Discord | "File scan not working in latest but 0.1.4 works?" | Josh.5's Applications https://discord.com/channels/819327740279914516/1096354979628986409
  19. I found that if I remove my second virtual disk "/mnt/user/Photos_backup/vdisk_photo.qcow2" then it boots? Any idea how to fix this? I would hate to create a new disk and move all the files to it again! (A quick integrity check on the image is sketched after the XML.) The relevant disk entries from the VM XML:
     <disk type='file' device='disk'>
       <driver name='qemu' type='raw' cache='writeback'/>
       <source file='/mnt/cache_appdata/domains/XPEnology_DS3617xs_DSM_7/tinycore-redpill.v0.8.0.0.img' index='3'/>
       <backingStore/>
       <target dev='hdc' bus='usb'/>
       <boot order='1'/>
       <alias name='usb-disk2'/>
       <address type='usb' bus='0' port='1'/>
     </disk>
     <disk type='file' device='disk'>
       <driver name='qemu' type='qcow2' cache='writeback'/>
       <source file='/mnt/cache_appdata/domains/XPEnology_DS3617xs_DSM_7/vdisk1.qcow2' index='2'/>
       <backingStore/>
       <target dev='hdd' bus='sata'/>
       <alias name='sata1-0-3'/>
       <address type='drive' controller='1' bus='0' target='0' unit='3'/>
     </disk>
     <disk type='file' device='disk'>
       <driver name='qemu' type='qcow2' cache='writeback'/>
       <source file='/mnt/user/Photos_backup/vdisk_photo.qcow2' index='1'/>
       <backingStore/>
       <target dev='hde' bus='sata'/>
       <alias name='sata1-0-4'/>
       <address type='drive' controller='1' bus='0' target='0' unit='4'/>
     </disk>
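     A minimal sketch of a first integrity check on that image, assuming qemu-img is available on the Unraid host and the VM is shut down:

         # Show format and size metadata for the suspect image
         qemu-img info /mnt/user/Photos_backup/vdisk_photo.qcow2

         # Check the qcow2 metadata for corruption (read-only by default; -r leaks or -r all attempts repairs)
         qemu-img check /mnt/user/Photos_backup/vdisk_photo.qcow2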
  20. Hi All, I have been running v. 7.1.0 and I did a full backup of the VM folder on my Unraid. I tried the upgrade to 7.1.1 and it failed, so I copied the old VMs over the existing ones and started my Synology up again. But for some reason it now hangs during boot, any ideas?
  21. Hi All, I am running the latest official Plexinc docker and I am now getting crashes and this error: Critical: libusb_init failed. I don't have a USB tuner installed; I use the proxy tuner from my SAT TV. Can I just ignore this error? After some time (days) the server gets unresponsive, and in the log I only see a long string of: Decoder information 249. Diag file attached: plexzone-diagnostics-20230409-1328.zip
  22. Update: I found that you need a version above 7 for Gotenberg, and you need to use the IP as endpoints and not the "docker name" on the internal "proxynet". Also, many variables are missing from the docker template if you want the inv. proxy working, and also for the integrations to Tika & Gotenberg - BUT IT WORKS NOW! 🙂 (The endpoint variables are sketched below.) I also used the link below to compare the docker conf: paperless-ngx/docker-compose.sqlite-tika.yml at main · paperless-ngx/paperless-ngx (github.com)
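     A minimal sketch of the Tika/Gotenberg endpoint variables on the paperless-ngx container, using the variable names from the compose file linked above and IPs instead of container names (the host IP and the 3002 Gotenberg port mapping are from my setup; the container defaults are 3000 and 9998):

         PAPERLESS_TIKA_ENABLED=1
         PAPERLESS_TIKA_ENDPOINT=http://192.168.0.6:9998
         PAPERLESS_TIKA_GOTENBERG_ENDPOINT=http://192.168.0.6:3002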
  23. Does this still work for you? I can see that the Gotenberg docker now only exists from v7 and upwards? No matter what I do, I can't get Tika and Gotenberg working; it looks like it adds some string to the API.
  24. I keep getting the "503 Server Error" when trying to convert to PDF with Paperless-ngx, Tika, and Gotenberg? I can see that if I add the values on the docker for Paperless, the path is wrong: http://192.168.0.6:3002/forms/libreoffice/convert# So I removed everything after "http://192.168.0.6:3002" and then I got this: And you can see that it adds "/forms/chromium/convert/html". I read something about the Tika and Gotenberg dockers being optimized for paperless-ngx? So maybe that's the problem? Anyway, has anyone gotten this to work? In my configuration, GOTENBERG_ENDPOINT is using port 3002 instead of port 3000.
  25. My Authelia was running perfectly and then suddenly it just wouldn't start? I found that if I stopped the MariaDB it would start. I then restored both Authelia & MariaDB and it still wouldn't start? SOLUTION: The latest update of MariaDB (I have auto update enabled for this docker) introduced an error! Start your MariaDB docker for Authelia (back up the DB first), go to the Docker terminal, and execute: $ mysql_upgrade -u root -p After running this, the update is "fixed" and everything works again. (A way to run it from the host shell is sketched below.) More about this error can be found here: https://github.com/authelia/authelia/issues/4519
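     A minimal sketch of running the same upgrade from the Unraid host shell instead of the container's console; "mariadb" here is a placeholder for whatever your MariaDB container is named:

         # Run the schema upgrade inside the running MariaDB container (prompts for the root password)
         docker exec -it mariadb mysql_upgrade -u root -p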