doesntaffect

Everything posted by doesntaffect

  1. I'll try the headless boot once I get my hands on a 3900. I'm using an MSI B450M micro-ATX board at the moment, so IPMI is not an option.
  2. My mobo has a full x16 slot; however, when using an iGPU-based CPU I lose 8 lanes in the x16 slot. The IPMI thing sounds interesting since I do not need a GPU at all - my VMs use a virtual one and the system itself sits in a closet.
  3. Why a dedicated GPU? I am thinking of using one because the iGPU takes up 8 PCIe lanes in my config. For my second cache pool I am currently using a 4x M.2 adapter in the x16 slot, where I can only use 2 NVMes (8 lanes) due to the iGPU. I'd like to upgrade this cache pool from RAID 1 (2x 1TB SSDs) to RAID 10 (4x 1TB SSDs). My current CPU is a 3400G (4 cores / iGPU, limited to 30W TDP through BIOS settings). I'd like to move to an 8-12 core CPU with max. 65W TDP, ideally 35W like the 4750GE - however, all the bigger 8-12 core CPUs with low TDP seem to be unavailable. Bottom line: ideally I'll get a Ryzen 9 3900 (12 cores / 65W), which would be my preferred solution, or stick with an iGPU CPU like the 4750G(E). A fallback would be the 3700X, limited to 35W. I want more cores while keeping a low TDP. Makes sense?
  4. Hi folks, I am looking for advice on which passive PCIe x1 card to buy. At the moment it looks to me like a passive card such as the Nvidia GT 710 is the choice nowadays. Are there any other passive / low-power x1 cards known to work well with UnRaid, with a power consumption < 20W, ideally 10W or less? I do not play any games and do not need to pass a GPU through to my VMs. Advice on older x1 cards is welcome too.
  5. I think this worked out so far, however there are still 500MB on my first cache pool. When I click Browse, the cache seems to be empty. I assume the 3.56MB on the second pool are related to the file system? Any advice? See attached screenshot.
  6. Appreciate the reactions guys! I heard the community around UnRaid is strong, and it appears to be true.
  7. This is good advice, thanks guys! I clicked the GUI help icon several times during my trial, however only now realized that it injects the help into the GUI.
  8. I'll give it a try tonight and hopefully don't mess this up. For shares where I chose "Yes" for the caching option, is the data also present on the disk array once the mover has done its job? Or will there be data left on the cache? My understanding is that only "Prefer" and "Only" keep data on the cache beyond a mover run. I still struggle to understand the flow of data between cache and array. With the increasing use of SSDs I think a "Cache Migrator" feature would be useful for a lot of users.
  9. Well, okay. 🙃 I admit I assumed that since this is a commercial product, there would be an official support channel which I would not need to spam (and which provides "certified" answers).
  10. I started to build a decent unRaid system, however here and there I encounter questions. Are this forum and the wiki the main or only places to go and ask for support? I think it's fair that I cannot expect all answers from this forum, however I might still have questions, especially in the first months. Given that I have to decide whether I want to buy a license in a couple of days, I wonder whether there is a commercial service (e.g. support@Unraid.net) available for paying customers?
  11. I installed another set of cache SSDs and want to move my appdata from the old cache to the new one. Do I simply change the cache used in the share configuration (from cache to cachex in my case) and hit the Mover button? I want to delete the data on the old cache pool later.
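If the mover route doesn't cover it, a manual copy between pools is an option. A minimal sketch, assuming the pools are mounted at /mnt/cache and /mnt/cachex (placeholder names); the runnable demo below uses throwaway directories instead of the real pools so it is safe anywhere:

```shell
# On the server the copy would be (paths are placeholders -- adjust to your pools):
#   rsync -a /mnt/cache/appdata/ /mnt/cachex/appdata/
# Demonstrated here on scratch directories:
src=$(mktemp -d)
dst=$(mktemp -d)
echo "dummy" > "$src/config.db"
# -a preserves permissions and ownership, which matters for container data;
# fall back to cp -a if rsync is not installed
rsync -a "$src/" "$dst/" 2>/dev/null || cp -a "$src/." "$dst/"
ls "$dst"
```

Copying first and deleting the old pool's data only after verifying the new pool avoids any one-way-door moment.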
  12. I troubleshot this further and it seems the Photoprism container I am running is causing the trouble. The container log says:
      fatal msg="can't create /photoprism/storage/sidecar: please check configuration and permissions"
      The container does start, however I cannot access the web interface or start a terminal. How do I troubleshoot the permissions on UnRaid? Afaik I didn't touch anything regarding the container. Other containers like NextCloud are running fine. Trying to start a terminal adds the following to the system log:
      Nov 30 12:10:08 Ryzen nginx: 2020/11/30 12:10:08 [error] 4096#4096: *662414 connect() to unix:/var/tmp/PhotoPrism.sock failed (111: Connection refused) while connecting to upstream, client: 192.168.178.32, server: , request: "GET /dockerterminal/PhotoPrism/token HTTP/1.1", upstream: "http://unix:/var/tmp/PhotoPrism.sock:/token", host: "ryzen", referrer: "http://ryzen/dockerterminal/PhotoPrism/"
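A sketch of checking permissions from the host side. The host path /mnt/user/appdata/photoprism is an assumption (it depends on the container's volume mapping); 99:100 is Unraid's default nobody:users container user. The runnable part only reads ownership of a scratch file:

```shell
# On the server (path is an assumption -- use your container's appdata mapping):
#   ls -ln /mnt/user/appdata/photoprism
# If the numeric owner is not 99:100 (nobody:users, Unraid's usual container user):
#   chown -R 99:100 /mnt/user/appdata/photoprism
# Demonstrated here by reading numeric ownership of a scratch file:
f=$(mktemp)
uid=$(stat -c '%u' "$f" 2>/dev/null || stat -f '%u' "$f")
echo "owner uid: $uid"
```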
  13. Well, then it looks like several containers are constantly restarting. Is this default behavior?
  14. -- adjusted the title to better reflect the issue
      Edit: The issue below seems to be with the Photoprism app / container.
      Hi folks, for the past couple of hours my system log has been full of these entries:
      Nov 29 21:18:08 Ryzen kernel: eth0: renamed from veth49cafcb
      Nov 29 21:18:10 Ryzen kernel: veth49cafcb: renamed from eth0
      Nov 29 21:19:11 Ryzen kernel: eth0: renamed from veth0124e07
      Nov 29 21:19:13 Ryzen kernel: veth0124e07: renamed from eth0
      Nov 29 21:20:15 Ryzen kernel: eth0: renamed from vethb387d3f
      Nov 29 21:20:17 Ryzen kernel: vethb387d3f: renamed from eth0
      Nov 29 21:21:18 Ryzen kernel: eth0: renamed from vethb1a6ac2
      Nov 29 21:21:20 Ryzen kernel: vethb1a6ac2: renamed from eth0
      Nov 29 21:22:22 Ryzen kernel: eth0: renamed from vetha2266bd
      Nov 29 21:22:24 Ryzen kernel: vetha2266bd: renamed from eth0
      Nov 29 21:23:25 Ryzen kernel: eth0: renamed from veth9335967
      Nov 29 21:23:28 Ryzen kernel: veth9335967: renamed from eth0
      Nov 29 21:24:05 Ryzen kernel: vethd93d4cd: renamed from eth0
      How can I troubleshoot this? How can I get the full system log from the last 24hrs on UnRaid? Not sure if this is related, but I have a Pi-hole instance which I had to stop today since it seemed to be broken; however, the renaming kept going for hours and hours. I stopped the Heimdall container and it seems the renaming has stopped now. Can the renaming have an effect on Pi-hole? Any advice on how to pin this down? Thanks!
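Each veth rename pair is one container network interface going down and coming back, so counting the events narrows down how often the restart loop fires. A sketch using a few sample lines from the post; on the server you would grep /var/log/syslog (Unraid's live syslog) instead of the inline sample:

```shell
# On the server:
#   grep 'renamed from veth' /var/log/syslog
# Here, the same filter over sample lines copied from the post:
log='Nov 29 21:18:08 Ryzen kernel: eth0: renamed from veth49cafcb
Nov 29 21:19:11 Ryzen kernel: eth0: renamed from veth0124e07
Nov 29 21:20:15 Ryzen kernel: eth0: renamed from vethb387d3f'
count=$(printf '%s\n' "$log" | grep -c 'renamed from veth')
echo "rename events: $count"
```

The roughly one-minute spacing between events in the log is consistent with a container crash-looping on a restart policy.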
  15. I am fairly new to Unraid and want to put Authelia in front of my Nextcloud / Heimdall. What I understood so far is that the template https://github.com/ibracorp/authelia.xml/blob/master/authelia.xml is meant as a Docker template. Please correct me if that's not the case. My question is: how do I get an Authelia container set up which is based on this template? In the CA "Apps" I see only the official Authelia container for download. Thanks for any advice.
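A sketch of installing such a template by hand. The templates-user path is my assumption about where Unraid's dockerMan picks up custom templates from the flash drive; the runnable demo uses a scratch directory standing in for it:

```shell
# On the server (destination path is an assumption):
#   wget -P /boot/config/plugins/dockerMan/templates-user/ \
#     https://raw.githubusercontent.com/ibracorp/authelia.xml/master/authelia.xml
# Demonstrated on a scratch directory instead of the real flash drive:
dest="$(mktemp -d)/templates-user"
mkdir -p "$dest"
printf '<Container><Name>authelia</Name></Container>\n' > "$dest/authelia.xml"
ls "$dest"
```

Once the XML is in place, the template should appear under "Add Container" in the template dropdown.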
  16. Thanks guys, I swapped the SATA cable and will keep monitoring this.
  17. Hi folks, still learning how to manage UnRaid as I move towards the end of my trial period, and so far I have received 4 CRC errors on one of my disks. Today the 4th error was reported and I am curious whether I should worry. The disks are brand new, all of the same type; it's my main disk where these errors occur. From the logs below I understand that something is going wrong on the disk bus. I am using WD Red 4TB NAS disks. Any advice? So far I am only acknowledging the errors. I also checked the SATA cables. Would ECC memory do any trick here?
      Nov 27 01:58:39 Ryzen kernel: ata2.00: failed command: WRITE FPDMA QUEUED
      Nov 27 01:58:39 Ryzen kernel: ata2.00: cmd 61/20:a8:10:89:2a/01:00:e9:00:00/40 tag 21 ncq dma 147456 out
      Nov 27 01:58:39 Ryzen kernel: res 40/00:a0:10:89:2a/00:00:e9:00:00/40 Emask 0x10 (ATA bus error)
      Nov 27 01:58:39 Ryzen kernel: ata2.00: status: { DRDY }
      Nov 27 01:58:39 Ryzen kernel: ata2: hard resetting link
      Nov 27 01:58:39 Ryzen kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
      Nov 27 01:58:39 Ryzen kernel: ata2.00: configured for UDMA/133
      Nov 27 01:58:39 Ryzen kernel: ata2: EH complete
      Nov 27 02:01:06 Ryzen kernel: ata2.00: exception Emask 0x11 SAct 0x11f000 SErr 0x680100 action 0x6 frozen
      Nov 27 02:01:06 Ryzen kernel: ata2.00: irq_stat 0x48000008, interface fatal error
      Nov 27 02:01:06 Ryzen kernel: ata2: SError: { UnrecovData 10B8B BadCRC Handshk }
      Nov 27 02:01:06 Ryzen kernel: ata2.00: failed command: READ FPDMA QUEUED
      Nov 27 02:01:06 Ryzen kernel: ata2.00: cmd 60/f8:60:78:74:b1/00:00:e9:00:00/40 tag 12 ncq dma 126976 in
      Nov 27 02:01:06 Ryzen kernel: res 40/00:a0:d0:64:1e/00:00:e9:00:00/40 Emask 0x10 (ATA bus error)
      Nov 27 02:01:06 Ryzen kernel: ata2.00: status: { DRDY }
      Nov 27 02:01:06 Ryzen kernel: ata2.00: failed command: READ FPDMA QUEUED
      Nov 27 02:01:06 Ryzen kernel: ata2.00: cmd 60/10:68:38:77:b1/00:00:e9:00:00/40 tag 13 ncq dma 8192 in
      Nov 27 02:01:06 Ryzen kernel: res 40/00:a0:d0:64:1e/00:00:e9:00:00/40 Emask 0x10 (ATA bus error)
      Nov 27 02:01:06 Ryzen kernel: ata2.00: status: { DRDY }
      Nov 27 02:01:06 Ryzen kernel: ata2.00: failed command: READ FPDMA QUEUED
      Nov 27 02:01:06 Ryzen kernel: ata2.00: cmd 60/20:70:98:77:b1/00:00:e9:00:00/40 tag 14 ncq dma 16384 in
      Nov 27 02:01:06 Ryzen kernel: res 40/00:a0:d0:64:1e/00:00:e9:00:00/40 Emask 0x10 (ATA bus error)
      Nov 27 02:01:06 Ryzen kernel: ata2.00: status: { DRDY }
      Nov 27 02:01:06 Ryzen kernel: ata2.00: failed command: READ FPDMA QUEUED
      Nov 27 02:01:06 Ryzen kernel: ata2.00: cmd 60/10:78:d8:78:b1/00:00:e9:00:00/40 tag 15 ncq dma 8192 in
      Nov 27 02:01:06 Ryzen kernel: res 40/00:a0:d0:64:1e/00:00:e9:00:00/40 Emask 0x10 (ATA bus error)
      Nov 27 02:01:06 Ryzen kernel: ata2.00: status: { DRDY }
      Nov 27 02:01:06 Ryzen kernel: ata2.00: failed command: READ FPDMA QUEUED
      Nov 27 02:01:06 Ryzen kernel: ata2.00: cmd 60/10:80:28:86:3f/00:00:e9:00:00/40 tag 16 ncq dma 8192 in
      Nov 27 02:01:06 Ryzen kernel: res 40/00:a0:d0:64:1e/00:00:e9:00:00/40 Emask 0x10 (ATA bus error)
      Nov 27 02:01:06 Ryzen kernel: ata2.00: status: { DRDY }
      Nov 27 02:01:06 Ryzen kernel: ata2.00: failed command: READ FPDMA QUEUED
      Nov 27 02:01:06 Ryzen kernel: ata2.00: cmd 60/20:a0:d0:64:1e/00:00:e9:00:00/40 tag 20 ncq dma 16384 in
      Nov 27 02:01:06 Ryzen kernel: res 40/00:a0:d0:64:1e/00:00:e9:00:00/40 Emask 0x10 (ATA bus error)
      Nov 27 02:01:06 Ryzen kernel: ata2.00: status: { DRDY }
      Nov 27 02:01:06 Ryzen kernel: ata2: hard resetting link
      Nov 27 02:01:07 Ryzen kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
      Nov 27 02:01:07 Ryzen kernel: ata2.00: configured for UDMA/133
      Nov 27 02:01:07 Ryzen kernel: ata2: EH complete
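BadCRC / Handshk errors point at the SATA link (cable, connector, backplane) rather than the platter, so ECC RAM would not help here. The counter to watch over time is SMART attribute 199 (UDMA_CRC_Error_Count); a sketch parsing a captured sample line (the device name /dev/sdb and the sample values are placeholders):

```shell
# On the server: smartctl -A /dev/sdb   (device name is a placeholder)
# Parsing a captured sample line here; the lifetime count is the last field:
line='199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 4'
crc=$(echo "$line" | awk '$2 == "UDMA_CRC_Error_Count" {print $NF}')
echo "lifetime CRC errors: $crc"
```

If the raw value stays flat after a cable swap, the old cable was likely the culprit; if it keeps climbing, the port or controller is next on the suspect list.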
  18. So far the system has been stable for a couple of days after I changed my cache config from 3 NVMes (2x 512GB + 1x 250GB; the latter in a PCIe x1 converter card) to just 2x 512GB. The single NVMe has been turned into a separate cache, obviously without any RAID config. The Win10 VM is behaving fine, even with heavy load on the host and the VM over a couple of days. I'll mark this as solved since I cannot provide a copy of the original diagnostics. Thanks guys!
  19. The plugin does not recognize my 3 cache drives (NVMes). Is that by design?
  20. I managed to copy parts of the sys log. I moved the VM from cache to HDD and so far no issues. I didn't test with a new VM on the SSD cache yet, as I am afraid this will trouble the parity disk again. Any further advice?
      Nov 20 08:30:31 Ryzen kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 2, rd 0, flush 0, corrupt 0, gen 0
      Nov 20 08:30:34 Ryzen kernel: blk_update_request: I/O error, dev loop2, sector 149152 op 0x1:(WRITE) flags 0x1800 phys_seg 3 prio class 0
      Nov 20 08:30:34 Ryzen kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 3, rd 0, flush 0, corrupt 0, gen 0
      Nov 20 08:30:34 Ryzen kernel: blk_update_request: I/O error, dev loop2, sector 153248 op 0x1:(WRITE) flags 0x1800 phys_seg 35 prio class 0
      Nov 20 08:30:34 Ryzen kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 4, rd 0, flush 0, corrupt 0, gen 0
      Nov 20 08:30:34 Ryzen kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 5, rd 0, flush 0, corrupt 0, gen 0
      Nov 20 08:30:34 Ryzen kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 6, rd 0, flush 0, corrupt 0, gen 0
      Nov 20 08:30:34 Ryzen kernel: blk_update_request: I/O error, dev loop2, sector 673440 op 0x1:(WRITE) flags 0x1800 phys_seg 3 prio class 0
      Nov 20 08:30:34 Ryzen kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 7, rd 0, flush 0, corrupt 0, gen 0
      Nov 20 08:30:34 Ryzen kernel: blk_update_request: I/O error, dev loop2, sector 677536 op 0x1:(WRITE) flags 0x1800 phys_seg 35 prio class 0
      Nov 20 08:30:34 Ryzen kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 8, rd 0, flush 0, corrupt 0, gen 0
      Nov 20 08:30:34 Ryzen kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 9, rd 0, flush 0, corrupt 0, gen 0
      Nov 20 08:30:34 Ryzen kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 10, rd 0, flush 0, corrupt 0, gen 0
      Nov 20 08:30:34 Ryzen kernel: blk_update_request: I/O error, dev loop2, sector 156928 op 0x1:(WRITE) flags 0x1800 phys_seg 1 prio class 0
      Nov 20 08:30:34 Ryzen kernel: blk_update_request: I/O error, dev loop2, sector 681216 op 0x1:(WRITE) flags 0x1800 phys_seg 1 prio class 0
      Nov 20 08:30:34 Ryzen kernel: BTRFS: error (device loop2) in btrfs_commit_transaction:2327: errno=-5 IO failure (Error while writing out transaction)
      Nov 20 08:30:34 Ryzen kernel: BTRFS info (device loop2): forced readonly
      Nov 20 08:30:34 Ryzen kernel: BTRFS warning (device loop2): Skipping commit of aborted transaction.
      Nov 20 08:30:34 Ryzen kernel: BTRFS: error (device loop2) in cleanup_transaction:1898: errno=-5 IO failure
      Nov 20 08:30:40 Ryzen smbd[24055]: [2020/11/20 08:30:40.654245, 0] ../../source3/smbd/service.c:850(make_connection_snum)
      Nov 20 08:30:40 Ryzen smbd[24055]: make_connection_snum: '/mnt/user/isos' does not exist or permission denied when connecting to [isos] Error was Input/output error
      Nov 20 08:30:40 Ryzen smbd[24055]: [2020/11/20 08:30:40.655062, 0] ../../source3/smbd/service.c:850(make_connection_snum)
      Nov 20 08:30:40 Ryzen smbd[24055]: make_connection_snum: '/mnt/user/isos' does not exist or permission denied when connecting to [isos] Error was Input/output error
      Nov 20 08:30:40 Ryzen smbd[24055]: [2020/11/20 08:30:40.655821, 0] ../../source3/smbd/service.c:850(make_connection_snum)
      Nov 20 08:30:40 Ryzen smbd[24055]: make_connection_snum: '/mnt/user/isos' does not exist or permission denied when connecting to [isos] Error was Input/output error
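Btrfs keeps per-device error counters that survive between log rotations, which helps tell whether errors are still accumulating. A sketch (the mount point /mnt/cache is a placeholder, and the runnable demo parses a captured sample line rather than calling btrfs on real hardware):

```shell
# On the server: btrfs dev stats /mnt/cache   (mount point is a placeholder)
# Sample output line, parsed for the counter value (2nd field):
sample='[/dev/nvme0n1p1].write_io_errs    10'
errs=$(echo "$sample" | awk '{print $2}')
echo "write_io_errs: $errs"
```

A nonzero write_io_errs on a specific device narrows the fault to that drive or its slot; `btrfs dev stats -z` resets the counters so a fresh run shows only new errors.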
  21. I am still trialing Unraid 6.9b35, and after I managed to get a couple of containers and shares up and running, I created a Win10 VM which seems to kill my parity disk. Symptoms: I spin up the VM and use it to browse the web; suddenly the parity disk turns red (disabled) and the sys log is full of errors. I wonder if my cache disk setup could cause the issues. The VM is stored on the cache disk. When I stop the array, remove the disk and add it again, I can start a parity sync, which ran for 3hrs with turbo write (constantly 182MB/sec) without issues. Starting the VM again ended with the same result as described above.
      System setup:
      3x 4TB WD Red
      2x 512GB SSD + 1x 250GB as cache pool, RAID 1
      Any advice on how I can troubleshoot this? Diagnostics attached. Thanks all! ryzen-diagnostics-20201120-2201.zip
  22. I plugged the stick into a USB2 port and so far things are running fine. I am new to UnRaid, coming from Synology, and am still learning. Thanks for the support!
  23. Thanks, should it be that easy? I'll give it a shot. Shutting down and replugging the stick into another port doesn't mess up anything with shares / containers etc.?
  24. Hi folks, I have had two failing USB sticks within a couple of days and wonder if there is a pattern. The server (hardware) is brand new and I am running the latest beta. I tried to create a Win 10 VM, which failed, and suddenly Docker and VMs are disabled and the Dashboard looks scrambled and shows error messages. The registration page says: Error code: ENOFLASH3. I stopped the array. Main doesn't show the 3 cache SSDs anymore, however the Dashboard does show the cache. Any advice?