
emuhack (Members, 81 posts)

Everything posted by emuhack

  1. 52 Dockers on an 8c16t - do you CPU pin for Dockers and VMs? I have a 3900X and I'm curious how your Dockers perform. I have about 10 Dockers running, and a W10 VM with passthrough, and it seems I run out of cores... Lol
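For what it's worth, here is a minimal sketch of how Docker CPU pinning is usually done from the command line (the Unraid template GUI exposes the same thing as "CPU Pinning"). The thread layout and which cores get reserved for the VM are assumptions; check your actual mapping with `lscpu -e=CPU,CORE` first.

```shell
# Assumed layout for a 3900X (12c/24t): threads 0-11 are the first SMT
# thread of each core and 12-23 are the siblings (verify: lscpu -e=CPU,CORE).
# Assumption: cores 8-11 (threads 8-11 and 20-23) are reserved for the VM.
DOCKER_CPUS="0-7,12-19"
# Echoed rather than executed here; on a live box you would run it directly.
echo "docker run -d --name plex --cpuset-cpus=${DOCKER_CPUS} plexinc/pms-docker"
```

With `--cpuset-cpus` set like this, containers can never land on the threads the VM is pinned to, so a busy Docker stack stops stealing time from the VM.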
  2. Is there anything I can do about this (see image below)? It's very slow and causing bottlenecks on my VM, where my VM disks are.
  3. I took out the two drives with the most read errors and put in my new 8TB drive. It has been running since 10:30pm CST with no errors... SOOOO my next question is: could the two drives that had the most errors cause the other drives to error as well? Another user brought up power cabling as a possible cause. I have three Molex runs from the PSU, each going to a 5-way splitter: the original 8TB, the parity drive, and one 2TB drive were on one splitter; the other five 2TB drives were on their own; and the cache pool of SSDs is on another splitter. I now have two fewer drives in the system and no errors as of yet. I have a 760W supply, and this calculator (https://outervision.com/power-supply-calculator) recommended 485W. So?
  4. The 8TB drive and parity on my system are on the same HBA, and no errors?
  5. Attached are the logs from this morning... emu-unraid-diagnostics-20191030-1349.zip
  6. I swapped out the cables and woke up this morning to drive errors. I don't know what is up. I'm beginning to think the drives are actually going bad; they all have 45-50k hours on them. I just ordered an 8TB off Amazon with delivery today to copy data from the drives. Unless you guys have any other thoughts, I hope this fixes it.
  7. I had almost 300 read errors as of 5 min ago... I just got the actual Adaptec cables... not cheap, but WHATEVER!!! lol I just switched out the cables and will keep you posted!!
  8. Two hours in and I now have one drive with errors. New cables come tomorrow and I will swap them out.

      Oct 28 13:10:58 eMu-Unraid kernel: print_req_error: I/O error, dev sdf, sector 1953515488
      Oct 28 13:10:58 eMu-Unraid kernel: md: disk6 read error, sector=1953515424
      Oct 28 13:10:58 eMu-Unraid kernel: md: disk6 read error, sector=1953515432
      Oct 28 13:10:58 eMu-Unraid kernel: md: disk6 read error, sector=1953515440
  9. I ran the command on all the drives that were reporting errors and it seems to be stable. The array is mounted, and I enabled my mover to move data over just to test writes to the array; it has been error free for 40 minutes now... so we will see. If this stays error free, what would have caused the errors in the first place if the disks were not bad?
  10. I ran the repair command on disk 7 and this was the output. Unless I'm missing something, it does not look like it did anything?

      root@eMu-Unraid:~# xfs_repair -v /dev/md7
      Phase 1 - find and verify superblock...
              - block cache size set to 752616 entries
      Phase 2 - using internal log
              - zero log...
      zero_log: head block 396 tail block 396
              - scan filesystem freespace and inode maps...
              - found root inode chunk
      Phase 3 - for each AG...
              - scan and clear agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 1
              - agno = 0
              - agno = 2
              - agno = 3
      Phase 5 - rebuild AG headers and trees...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - reset superblock...
      Phase 6 - check inode connectivity...
              - resetting contents of realtime bitmap and summary inodes
              - traversing filesystem ...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      Phase 7 - verify and correct link counts...

              XFS_REPAIR Summary    Mon Oct 28 10:17:12 2019

      Phase           Start           End             Duration
      Phase 1:        10/28 10:17:12  10/28 10:17:12
      Phase 2:        10/28 10:17:12  10/28 10:17:12
      Phase 3:        10/28 10:17:12  10/28 10:17:12
      Phase 4:        10/28 10:17:12  10/28 10:17:12
      Phase 5:        10/28 10:17:12  10/28 10:17:12
      Phase 6:        10/28 10:17:12  10/28 10:17:12
      Phase 7:        10/28 10:17:12  10/28 10:17:12

      Total run time:
      done
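For anyone landing here later, this is a sketch of the usual Unraid XFS check/repair flow (array started in maintenance mode; the disk number here is from this thread, substitute your own). The commands are echoed rather than executed in the snippet, since they need a real md device:

```shell
DEV=/dev/md7   # disk 7 from this thread; adjust per disk
# Dry run first: -n only reports problems, it never writes to the disk.
echo "xfs_repair -n $DEV"
# If the dry run reports problems, run the real repair.
echo "xfs_repair -v $DEV"
# If xfs_repair complains about a dirty log, mounting and cleanly unmounting
# the filesystem replays the log; -L (zero the log) is a last resort since
# it can discard the most recent metadata updates.
```

Output like the one above, where every phase starts and ends in the same second and nothing is moved to lost+found, generally just means the filesystem was already clean.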
  11. Most, if not 95%, of my Plex data is on the 8TB, so the other drives stay spun down 90% of the time... so I ordered these: https://www.amazon.com/gp/product/B009APIZFI/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1 and we will see what happens after I swap the cables. I will also run the file system check as you mentioned.
  12. I did that once already; I'm ordering longer cables as these are tight. It's just weird that it just started to happen, and there are no SMART errors! I guess, how long can I go until this starts affecting data? I can get 2TB drives on Amazon for $45 new, so I may just be replacing them. I also have a 1TB laying around that I may plug into the HBA to make sure it's not that as well. I just don't know where to start, and I'm hoping it's not 6 drives failing.
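Since there are "no SMART errors" but the symptoms smell like cabling: a sketch of the attributes worth watching (smartctl is part of smartmontools; the sample line below is made up for illustration). UDMA_CRC_Error_Count rises with bad cables or connectors, while Reallocated_Sector_Ct points at the disk media itself:

```shell
# On a live system you would run, per disk:
#   smartctl -A /dev/sdf | grep -E 'Reallocated_Sector|UDMA_CRC|Power_On_Hours'
# Parsing the RAW_VALUE (last column) from a sample attribute line:
line="199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0"
crc_errors=$(echo "$line" | awk '{print $NF}')
echo "CRC errors: $crc_errors"
```

If CRC counts climb after the errors recur, that implicates cables/backplane rather than the drives; if reallocated sectors climb instead, the drives really are dying.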
  13. Here is what is in the log, and now I'm up to 277 errors:

      Oct 27 18:16:38 eMu-Unraid kernel: md: disk3 read error, sector=2340611736
      Oct 27 18:16:38 eMu-Unraid kernel: md: disk5 read error, sector=2340611736
      Oct 27 18:16:38 eMu-Unraid kernel: md: disk6 read error, sector=2340611736
      Oct 27 18:16:38 eMu-Unraid kernel: md: disk7 read error, sector=2340611736
      Oct 27 18:16:38 eMu-Unraid kernel: md: disk2 read error, sector=2340611744
      Oct 27 18:16:38 eMu-Unraid kernel: md: disk3 read error, sector=2340611744
      Oct 27 18:16:38 eMu-Unraid kernel: md: disk5 read error, sector=2340611744
      Oct 27 18:16:38 eMu-Unraid kernel: md: disk6 read error, sector=2340611744
      Oct 27 18:16:38 eMu-Unraid kernel: md: disk7 read error, sector=2340611744
      Oct 27 20:16:28 eMu-Unraid kernel: mdcmd (87): spindown 0
      Oct 27 20:16:29 eMu-Unraid kernel: mdcmd (88): spindown 1
      Oct 27 20:16:30 eMu-Unraid kernel: mdcmd (89): spindown 2
      Oct 27 20:16:30 eMu-Unraid kernel: mdcmd (90): spindown 3
      Oct 27 20:16:30 eMu-Unraid kernel: mdcmd (91): spindown 5
      Oct 27 20:16:31 eMu-Unraid kernel: mdcmd (92): spindown 6
      Oct 27 20:16:31 eMu-Unraid kernel: mdcmd (93): spindown 7
      Oct 27 20:26:23 eMu-Unraid kernel: mdcmd (94): spindown 4
      Oct 27 21:30:20 eMu-Unraid kernel: sd 1:1:6:0: [sdd] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x05 driverbyte=0x00
      Oct 27 21:30:20 eMu-Unraid kernel: sd 1:1:6:0: [sdd] tag#0 CDB: opcode=0x28 28 00 c6 e6 a5 50 00 00 08 00
      Oct 27 21:30:20 eMu-Unraid kernel: print_req_error: I/O error, dev sdd, sector 3337004368
      Oct 27 21:30:20 eMu-Unraid kernel: md: disk4 read error, sector=3337004304
      Oct 27 21:30:30 eMu-Unraid kernel: sd 1:1:4:0: [sdb] tag#6 UNKNOWN(0x2003) Result: hostbyte=0x05 driverbyte=0x00
      Oct 27 21:30:30 eMu-Unraid kernel: sd 1:1:4:0: [sdb] tag#6 CDB: opcode=0x28 28 00 c6 e6 a5 50 00 00 08 00
      Oct 27 21:30:30 eMu-Unraid kernel: print_req_error: I/O error, dev sdb, sector 3337004368
      Oct 27 21:30:30 eMu-Unraid kernel: sd 1:1:5:0: [sdc] tag#5 UNKNOWN(0x2003) Result: hostbyte=0x05 driverbyte=0x00
      Oct 27 21:30:30 eMu-Unraid kernel: sd 1:1:5:0: [sdc] tag#5 CDB: opcode=0x28 28 00 c6 e6 a5 50 00 00 08 00
      Oct 27 21:30:30 eMu-Unraid kernel: print_req_error: I/O error, dev sdc, sector 3337004368
      Oct 27 21:30:30 eMu-Unraid kernel: sd 1:1:15:0: [sdi] tag#1 UNKNOWN(0x2003) Result: hostbyte=0x05 driverbyte=0x00
      Oct 27 21:30:30 eMu-Unraid kernel: sd 1:1:15:0: [sdi] tag#1 CDB: opcode=0x28 28 00 c6 e6 a5 50 00 00 08 00
      Oct 27 21:30:30 eMu-Unraid kernel: print_req_error: I/O error, dev sdi, sector 3337004368
      Oct 27 21:30:31 eMu-Unraid kernel: sd 1:1:7:0: [sde] tag#7 UNKNOWN(0x2003) Result: hostbyte=0x05 driverbyte=0x00
      Oct 27 21:30:31 eMu-Unraid kernel: sd 1:1:7:0: [sde] tag#7 CDB: opcode=0x28 28 00 c6 e6 a5 50 00 00 08 00
      Oct 27 21:30:31 eMu-Unraid kernel: print_req_error: I/O error, dev sde, sector 3337004368
      Oct 27 21:30:31 eMu-Unraid kernel: md: disk2 read error, sector=3337004304
      Oct 27 21:30:31 eMu-Unraid kernel: md: disk3 read error, sector=3337004304
      Oct 27 21:30:31 eMu-Unraid kernel: md: disk5 read error, sector=3337004304
      Oct 27 21:30:31 eMu-Unraid kernel: md: disk7 read error, sector=3337004304
      Oct 27 21:30:31 eMu-Unraid kernel: XFS (md4): metadata I/O error in "xfs_trans_read_buf_map" at daddr 0xc6e6a510 len 8 error 5
  14. I posted in the RC threads and was asked to come here and post. After I installed RC1 through RC4, I've been getting drive errors. I went back to 6.7 and it was fine for about 36 hours, then drive errors started showing up again. Attached is my diagnostics zip: emu-unraid-diagnostics-20191027-1723.zip
  15. I have the same errors now on 6.7.2, though not as many: RC4 was over 5k errors across 6 drives; now it's 80 across 6 drives. I'm ordering new SATA cables tonight... unless someone has a better thought on this.

      Oct 27 09:43:54 eMu-Unraid kernel: XFS (md4): metadata I/O error in "xfs_trans_read_buf_map" at daddr 0x89fdce20 len 8 error 5
      Oct 27 09:43:55 eMu-Unraid kernel: XFS (md4): metadata I/O error in "xfs_trans_read_buf_map" at daddr 0x89fdce20 len 8 error 5
      Oct 27 09:43:56 eMu-Unraid kernel: XFS (md4): metadata I/O error in "xfs_trans_read_buf_map" at daddr 0x89fdce20 len 8 error 5
      [the identical message, same daddr, repeats roughly once per second through:]
      Oct 27 09:44:25 eMu-Unraid kernel: XFS (md4): metadata I/O error in "xfs_trans_read_buf_map" at daddr 0x89fdce20 len 8 error 5
  16. I have upgraded from RC2 to RC4, and nothing but issues. I know it's a pre-release, but I would skip this release:
      - HBA issues (RAID bus controller: Adaptec Series 7 6G SAS/PCIe 3)
      - drive errors upwards of 5k
      - my Docker image corrupted (I had a backup, thank goodness)
      - my permissions corrupted on my array
      - I/O errors when writing to the array
      I downgraded to 6.7.2 and will wait.
  17. I had everything working on 6.8.0 RC1 for the last 4 (+/-) days. I shut down the server via the GUI and added a 1TB drive to the motherboard SATA ports; I did not touch my HBA card or any of the drives from the array. I booted back up and I see all my cache drives and the 1TB drive, but no drives from the HBA. I shut down and ran the HBA utility, and it sees all the drives and they are all healthy. What gives? Attached is the diag zip.

      Unraid sees the HBA card:
      IOMMU group 19: [9005:028c] 23:00.0 RAID bus controller: Adaptec Series 7 6G SAS/PCIe 3 (rev 01)

      Unraid says at the bottom of the GUI: Array Stopped • stale configuration

      Help please. It looks like Unraid sees the disks?

      SCSI Devices
      [0:0:0:0]    disk  Kingston DataTraveler 2.0 1.00  /dev/sda  7.75GB
      [1:0:0:0]    disk  ATA SAMSUNG SSD PM85 8D0Q       /dev/sdj  256GB
      [2:0:0:0]    disk  ATA ST1000DM010-2EP1 CC45       /dev/sdk  1.00TB
      [4:0:0:0]    disk  ATA Samsung SSD 850 1B6Q        /dev/sdl  250GB
      [5:0:0:0]    disk  ATA Samsung SSD 850 2B6Q        /dev/sdm  256GB
      [8:0:0:0]    disk  ATA Samsung SSD 840 CB6Q        /dev/sdn  250GB
      [9:0:0:0]    disk  ATA SAMSUNG SSD PM87 3D0Q       /dev/sdo  256GB
      [12:1:8:0]   disk  ATA WDC WD2003FYPS-2 04.0       /dev/sdb  2.00TB
      [12:1:9:0]   disk  ATA WDC WD2003FYPS-2 04.0       /dev/sdc  2.00TB
      [12:1:10:0]  disk  ATA WDC WD2003FYPS-2 04.0       /dev/sdd  2.00TB
      [12:1:11:0]  disk  ATA WDC WD2003FYPS-2 04.0       /dev/sde  2.00TB
      [12:1:12:0]  disk  ATA WDC WD2003FYPS-2 04.0       /dev/sdf  2.00TB
      [12:1:13:0]  disk  ATA WDC WD80EMAZ-00W 83.H       /dev/sdg  8.00TB
      [12:1:14:0]  disk  ATA WDC WD80EMAZ-00W 83.H       /dev/sdh  8.00TB
      [12:1:15:0]  disk  ATA WDC WD2003FYPS-2 04.0       /dev/sdi  2.00TB

      Build:
      - 3900X CPU
      - Micro-Star International Co., Ltd. MPG X570 GAMING PLUS
      - 16GB RAM
      - 8TB parity with 20TB of other disks
      - 5 SSD drives for cache
      - 71605 HBA card (was working until reboot; sees drives outside of Unraid)
      - GTX 1050 Ti
      - 2 Ethernet controllers

      emu-unraid-diagnostics-20191023-2003.zip
  18. Anybody know if the kernel "tun: unexpected GSO" issue, with VMs and Dockers being on the same adapter, has been fixed in RC3?
  19. If someone can point me in the direction of setting up virbr0 to pass traffic to my br0, I can live with a different IP range. I know enough advanced networking to be dangerous... lol. Otherwise I will have to go get another NIC.
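As a hedged sketch of the "different IP range" route: libvirt supports a routed network instead of the default NAT one, so VM traffic can reach the br0 LAN without address translation. The network name, bridge name, and the 192.168.123.0/24 range below are all assumptions, and the upstream router needs a static route for that subnet pointing at the Unraid box.

```xml
<!-- Hypothetical routed libvirt network (define with: virsh net-define routed0.xml) -->
<network>
  <name>routed0</name>
  <forward mode='route' dev='br0'/>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='192.168.123.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.123.100' end='192.168.123.200'/>
    </dhcp>
  </ip>
</network>
```

VMs attached to this network get addresses in 192.168.123.0/24 and are reachable from the br0 subnet once the static route back is in place.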
  20. I don't get this when I have my Dockers running; I only get the below when my Win10 VM is running. Any thoughts?

      Oct 17 09:10:36 eMu-Unraid kernel: tun: unexpected GSO type: 0x0, gso_size 1308, hdr_len 1374
      Oct 17 09:10:36 eMu-Unraid kernel: tun: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff  ................
      [each "unexpected GSO" entry is followed by four lines of packet hex dump; the same GSO message, with varying payload bytes, repeats through Oct 17 09:10:38]
  21. Sounds good. I will be getting a 1TB NVMe cache drive in the coming months, so I'm not too worried about filling it up.
  22. Awesome... so let me know if this is correct and would be optimal:
      - My TV shows from Sonarr and movies from Radarr should be Cache: Yes; at the specified setting they will be moved to the array by the mover.
      - Any share that I want to have speed on should be Cache: Yes; at the specified setting it will be moved to the array by the mover.
      - My VM and Docker appdata should be Cache: Only; if I want to back up this data, I use CA Appdata Backup or my CrashPlan pointed at the cache drive.
  23. Sorry to hijack... I have a question about this. I have this set up, and when I have the mover move files to the array weekly, that is essentially "backing up" the Docker and VM info to the array, which is under parity protection?
  24. I went down the path of the 3900X, and everything seems to be working for me right now CPU- and performance-wise. I have some Docker issues, but nothing related to the CPU and mobo.

      Build:
      - AMD Ryzen 3900X (12c/24t)
      - MSI MPG X570 Gaming Plus
      - H100i Pro liquid AM4 cooler
      - 16GB DDR4
      - GTX 1050 Ti