tdotr6

Members • Posts: 24 • Joined • Last visited
Everything posted by tdotr6

  1. It's clear this is caused by the recent patch, where Unraid claims to have resolved the macvlan issue. I've been on macvlan for many years with this hardware and it was stable. Now that I've had issues after the patch, and as you @JorgeB suggested, I've changed to ipvlan and we're stable with 0 errors for 17 days. Unraid should be responding to this, as I'm not the only one who has experienced it since the recent patch.
  2. Is there anything here to indicate he potentially has bad memory, or is this the go-to suggestion around here?
  3. This is posted above; yes, that was indeed a screenshot showing he hadn't even looked at it, yet continued to provide bad advice. @dirkinthedark already did that, and agreed there were issues with that as well. It SEEMS for now that, although I have run macvlan for many years, two updates ago they made a big change around this and clearly broke it for setups that were working, causing kernel panics and other issues. Since I have been on ipvlan for a few days, no issues. I very much do want to look at going to TrueNAS; once Coral support is available, I'm jumping ship.
  4. A moderator is very much a representative of Limetech, my dude. What? You still haven't checked out the second diagnostics file posted. Great support from the peers, lol. Thanks, mods. I am going to take this time and move myself to TrueNAS Scale. This was the push I needed. Thank you.
  5. What's with the attitude? You initially ignored that I already did a memtest that PASSED before I came to this post, and I have also posted a diagnostics dump from a few weeks prior, when the system had the same issue and I remembered to grab the dump before I rebooted. Why ignore it? Why ignore the extra logs I am giving you to make your product better? It is very interesting that, with the amount of logs I've provided, you've still not reviewed the diagnostics from the earlier crash. I've also provided 30 days of logs covering all the BTRFS/checksum and kernel errors. Yet you're going to reply with "it's my right"? I guess it's your right to not actually want to help someone who has been a dedicated user, supporter, and patron of this product for nearly a decade. The first time I come looking for help, this is how I'm treated. Maybe it's just time to look at unRAID alternatives.
  6. I think I will take 2 sticks of RAM from a machine I've had since 2017 and swap them in. That said, looking at some of my dashboards, the crash in svr-diagnostics-20231222-0956.zip happens at 2:56 AM:
     Dec 22 02:56:31 SVR kernel: BTRFS critical (device sdf1): corrupt leaf: root=5 block=272887431168 slot=110 ino=5203363 file_offset=229376, invalid ram_bytes for file extent, have 65535, should be aligned to 4096
     Dec 22 02:56:31 SVR kernel: BTRFS info (device sdf1): leaf 272887431168 gen 51281 total ptrs 194 free space 16 owner 5
     Dec 22 02:56:31 SVR kernel: item 0 key (5203347 1 0) itemoff 16123 itemsize 160
     Dec 22 02:56:31 SVR kernel: inode generation 40834 size 42 mode 40755
     I must ask as well: do you see a similarity between the crashes in svr-diagnostics-20231222-0956.zip and svr-diagnostics-20231222-1126.zip? Both were taken before a reboot while experiencing the issue, i.e. container errors had started and I was unable to start a stopped container again. I would suspect that if it's RAM I would see similar signatures, but I am not experienced at looking at these logs like you are; I don't see enough similarity between the crashes to point to a RAM (or other hardware) issue. @JorgeB - https://pastebin.com/BKKBwb3a You referenced the checksum errors as the thing for me to focus on, but I must disagree. I am exporting syslogs, so I have full detail from the last 30 days... it just doesn't seem to be the smoking gun you've pointed it out to be?
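The "should be aligned to 4096" complaint in the corrupt-leaf line above can be sanity-checked by hand: btrfs expects the extent's ram_bytes to be a multiple of the 4096-byte sector size, and 65535 is one byte short of 64 KiB. A minimal sketch (the values come from the quoted log; the helper name is mine):

```python
# btrfs flags a file extent whose ram_bytes is not a multiple of the
# sector size; the log above shows "have 65535, should be aligned to 4096".
SECTOR_SIZE = 4096

def is_aligned(ram_bytes: int, sector: int = SECTOR_SIZE) -> bool:
    """True when ram_bytes is an exact multiple of the sector size."""
    return ram_bytes % sector == 0

print(is_aligned(65535))  # the value btrfs flagged -> False
print(is_aligned(65536))  # the nearest aligned value -> True
```

A single bit flipped in that field (65536 is 0x10000, 65535 is 0xFFFF) is exactly the kind of damage bad RAM produces, which may be why memory keeps coming up in the replies.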
  7. Yes, I understand that; I am just pointing out it's new as well as tested. Are you referring to these checksum errors? Any other tests you'd recommend for the board/CPU?
  8. I've changed to ipvlan and will monitor.. thank you @itimpi and @JorgeB svr-diagnostics-20231222-1126.zip
  9. A memtest has been done and I don't have bad RAM; this is fairly new RAM as well. I also rebuilt the cache pool a few weeks ago: when I was first experiencing errors I rebuilt it and replaced 2 SSDs in the cache pool with brand new ones, because I thought maybe the old SSDs were dying at 6 years old. Now all SSDs in the pool of 4 are very new. As well, my friend is getting very similar errors ("corrupt 8" instead of "corrupt 12") on his server, and he is on totally different hardware; I can have him post his dumps to this thread too. Same thing with the container errors in the AM, etc. The server can run fine for a few days with no issues and then all of a sudden has issues. Again, the system was VERY stable before upgrading from .4. Is there a way we can just roll back to before .4, two upgrade versions back? I only upgraded again because I thought it would fix the issues; I should have rolled back then. RE: "First thing change docker network to ipvlan and reboot." Will do.
  10. Hello unRAID Team! I have had nothing but stability with my system for the last several years, rock solid. Recently I'm encountering errors every day or every other day, and I can't figure out a pattern or trigger. I have a friend who is having almost identical issues, and the only thing we have both done recently is upgrade to 6.12.6. We run almost the same containers as well, and our hardware is different: he is on a later-gen Intel with DDR5, I am still on DDR4. I have attached dumps from 2 different times when I experienced the following: I check the dashboard in the AM and find 1 or 2 containers stopped, and I'm unable to start them. It's never the same containers. If I stop another container at that point, it won't start, and even though the others look like they're running, they don't work correctly. For instance, Grafana is running but I can't get to the dashboard (I get {"traceID":""}), yet I am able to get to Frigate and access all of my cameras and footage. It's very strange! After a reboot it all seems fine again, no ongoing errors, until the issue repeats. Checking syslogs today I do see repeated errors. svr-diagnostics-20231216-1719.zip svr-diagnostics-20231222-0956.zip
  11. Can you PLEASE add an option to not remove the recycle bin folder when it's empty? This is causing issues with programs such as Sonarr that are not smart enough to just 'delete'; it looks like they do more of a move operation, which causes lots of errors after the unRAID recycle bin purges. Things only work well again once I manually delete a file so the ./recycle bin folder comes back.
  12. Hey I totally forgot about this post! Where did you end up posting the document? Thanks again.
  13. Thanks for the help, guys; it was the tvheadend plugin I was running. I have moved to a Docker container and it's all working now. For anyone in the future who wants to know the setup, I installed linuxserver/tvheadend:latest, huxy/xmltv-sd-json:latest, and the plugin unRAID DVB Edition. tvheadend docker -> https://ss.myvpn.me/636c9cRS.png (add: --device=/dev/dvb/) XMLTVSchedulesDirect docker -> https://ss.myvpn.me/718441RS.png tvheadend config -> https://ss.myvpn.me/4b36d0RS.png Finished setup -> https://ss.myvpn.me/396d34RS.png
  14. Thanks for the reply, gents. What I find is that the XMLTV module won't stay enabled: I enable it and it deletes the xmltv.sock that was created by the Docker container, so I know it's reading the right dir, but nothing I do can keep it enabled. (Removed the link for bandwidth; just don't use the old tvheadend plugin on unRAID, use the Docker container, see the end of the post.)
  15. Hey, I am a little confused. I think I have things running correctly in the Docker container, but I can't seem to get it to work with TVHeadend. Not sure if I have the directory structure wrong? This is my TVHeadend config -> https://ss.myvpn.me/d5e7f7RS.png This is what my epggrab dir looks like -> https://ss.myvpn.me/9a9d44RS.png This is my docker config -> https://ss.myvpn.me/12ea7bRS.png So I can get the container to generate the xmltv.sock, and the container stays running and looks to be working correctly. But I can't get TVHeadend to read anything, and when I select XMLTV and hit save it deletes the xmltv.sock that was generated by the container; if I restart the container to regenerate the file, TVHeadend still doesn't show anything. I have restarted TVHeadend as well and still no luck. What am I doing wrong?
  16. Yeah, it seemed to be power related in my case; no errors for 24 hours now and none of the links have reset, so I think things are looking good. I'll go and upgrade the PSU from the 500W I have in currently to a 750W.
  17. Maxed out the previous post and was going to edit it... Well, anyway, I rebooted and these are the latest logs from cache drives 2 and 3. Good news is no red so far? The 3 yellow errors I have put in bold.
     Drive 2:
     Mar 28 18:59:10 CORE kernel: ata1: SATA max UDMA/133 abar m2048@0xf4712000 port 0xf4712100 irq 28
     Mar 28 18:59:10 CORE kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
     Mar 28 18:59:10 CORE kernel: ata1.00: READ LOG DMA EXT failed, trying unqueued
     Mar 28 18:59:10 CORE kernel: ata1.00: failed to get NCQ Send/Recv Log Emask 0x1
     Mar 28 18:59:10 CORE kernel: ata1.00: ATA-9: Samsung SSD 840 Series, S14CNEACA47078H, DXT09B0Q, max UDMA/133
     Mar 28 18:59:10 CORE kernel: ata1.00: 234441648 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
     Mar 28 18:59:10 CORE kernel: ata1.00: failed to get NCQ Send/Recv Log Emask 0x1
     Mar 28 18:59:10 CORE kernel: ata1.00: configured for UDMA/133
     Mar 28 18:59:10 CORE kernel: ata1.00: Enabling discard_zeroes_data
     Mar 28 18:59:10 CORE kernel: sd 1:0:0:0: [sdb] 234441648 512-byte logical blocks: (120 GB/112 GiB)
     Mar 28 18:59:10 CORE kernel: sd 1:0:0:0: [sdb] Write Protect is off
     Mar 28 18:59:10 CORE kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
     Mar 28 18:59:10 CORE kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
     Mar 28 18:59:10 CORE kernel: ata1.00: Enabling discard_zeroes_data
     Mar 28 18:59:10 CORE kernel: sdb: sdb1
     Mar 28 18:59:10 CORE kernel: ata1.00: Enabling discard_zeroes_data
     Mar 28 18:59:10 CORE kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
     Mar 28 18:59:22 CORE emhttp: Samsung_SSD_840_Series_S14CNEACA47078H (sdb) 117220824
     Mar 28 18:59:22 CORE emhttp: import 5 cache device: sdb
     Mar 28 18:59:23 CORE emhttp: shcmd (7): /usr/sbin/hdparm -S0 /dev/sdb &> /dev/null
     Mar 28 18:59:23 CORE emhttp: Samsung_SSD_840_Series_S14CNEACA47078H (sdb) 117220824
     Mar 28 18:59:23 CORE emhttp: import 5 cache device: sdb
     [the same pair of emhttp import messages for sdb repeats every minute from 19:00:01 through 19:12:46]
     Mar 28 19:12:46 CORE emhttp: shcmd (68): /usr/sbin/hdparm -S0 /dev/sdb &> /dev/null
     Mar 28 19:12:46 CORE kernel: BTRFS: device fsid 640b4483-c816-4562-b098-6b90c83f57ef devid 1 transid 376462 /dev/sdb1
     Mar 28 19:12:47 CORE kernel: BTRFS: bdev /dev/sdb1 errs: wr 0, rd 0, flush 0, corrupt 2, gen 2
     Drive 3:
     Mar 28 18:59:10 CORE kernel: ata7: SATA max UDMA/133 abar m512@0xf4610000 port 0xf4610100 irq 30
     Mar 28 18:59:10 CORE kernel: ata7: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
     Mar 28 18:59:10 CORE kernel: ata7.00: supports DRM functions and may not be fully accessible
     Mar 28 18:59:10 CORE kernel: ata7.00: disabling queued TRIM support
     Mar 28 18:59:10 CORE kernel: ata7.00: ATA-9: Samsung SSD 850 PRO 128GB, S24ZNXAGB10425J, EXM02B6Q, max UDMA/133
     Mar 28 18:59:10 CORE kernel: ata7.00: 250069680 sectors, multi 1: LBA48 NCQ (depth 31/32), AA
     Mar 28 18:59:10 CORE kernel: ata7.00: supports DRM functions and may not be fully accessible
     Mar 28 18:59:10 CORE kernel: ata7.00: disabling queued TRIM support
     Mar 28 18:59:10 CORE kernel: ata7.00: configured for UDMA/133
     Mar 28 18:59:10 CORE kernel: sd 7:0:0:0: [sdh] 250069680 512-byte logical blocks: (128 GB/119 GiB)
     Mar 28 18:59:10 CORE kernel: sd 7:0:0:0: [sdh] Write Protect is off
     Mar 28 18:59:10 CORE kernel: sd 7:0:0:0: [sdh] Mode Sense: 00 3a 00 00
     Mar 28 18:59:10 CORE kernel: sd 7:0:0:0: [sdh] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
     Mar 28 18:59:10 CORE kernel: sdh: sdh1
     Mar 28 18:59:10 CORE kernel: sd 7:0:0:0: [sdh] Attached SCSI disk
     Mar 28 18:59:22 CORE emhttp: Samsung_SSD_850_PRO_128GB_S24ZNXAGB10425J (sdh) 125034840
     Mar 28 18:59:22 CORE emhttp: import 6 cache device: sdh
     Mar 28 18:59:23 CORE emhttp: shcmd (8): /usr/sbin/hdparm -S0 /dev/sdh &> /dev/null
     Mar 28 18:59:23 CORE emhttp: Samsung_SSD_850_PRO_128GB_S24ZNXAGB10425J (sdh) 125034840
     Mar 28 18:59:23 CORE emhttp: import 6 cache device: sdh
     [the same pair of emhttp import messages for sdh repeats every minute from 19:00:01 through 19:12:46]
     Mar 28 19:12:46 CORE emhttp: shcmd (69): /usr/sbin/hdparm -S0 /dev/sdh &> /dev/null
     Mar 28 19:12:46 CORE kernel: BTRFS: device fsid 640b4483-c816-4562-b098-6b90c83f57ef devid 3 transid 376462 /dev/sdh1
     Mar 28 19:12:47 CORE kernel: BTRFS: bdev /dev/sdh1 errs: wr 0, rd 0, flush 0, corrupt 3702, gen 0
  18. Hi unRAID Community, I hope I can get some help here with my BTRFS cache pool. I never had any errors or issues, and no hardware or software changes to the system in months (just regular unRAID updates). Last week I had an issue with one of my Docker containers: I went to restart it and it refused to come back online, with no errors in the container logs at the time. After a few more apps also refused to restart, I restarted the Docker engine. My docker image seemed broken; if I pointed to a new image file the Docker engine would start, so, not too concerned at the time, I just kept the new image and re-installed a few containers, which only took a few minutes. Well, yesterday I had the same issue: a container was not working, I went to restart it, and as I started digging into different errors I found a few on my Cache 2 and 3 drives. I don't have the original errors anymore since I have rebooted, but my SATA link was resetting and speeds were dropping from 6 Gbps on startup to 3 and then finally 1.5; it was very strange. The only thing the 2 drives have in common is that they shared the same power split. Since this is one of the mentioned symptoms of such errors, I added a temporary power supply for the 2 SSDs and that seems to have stopped most of the errors in the 2 cache drive logs. I have now run a scrub, twice. I am still seeing some errors in the logs, and I know some are from when I ran the scrub. I think I should do another reboot to clear them again, but I want to understand more about the errors and the nature of the corruption. Since I had 2 drives with issues, is that the reason my docker image was corrupting? Any way to use BTRFS snapshots to recover next time? Edit: TL;DR Dockers crashed; made a new img file; reinstalled dockers pointed at the existing paths; dockers crashed again last night. Errors on 2 SSDs out of the 4-drive cache pool. The 2 drives do not share the same SATA controller: one is on a PCI-E card and the other is on the mobo.
     They did share the same daisy-chained power source off of some fans (yeah, I know, I was short on connectors), and they still temporarily share power, but now from a dedicated external Molex power supply, just until I put in a new PSU next week. Drives 1 and 4 do not have any logs; only 2 and 3 are generating output.
     ------------ Cache Drive 2 Errors ---------
     Mar 28 17:37:59 CORE kernel: ata1: SATA max UDMA/133 abar m2048@0xf4712000 port 0xf4712100 irq 28
     Mar 28 17:37:59 CORE kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
     Mar 28 17:37:59 CORE kernel: ata1.00: READ LOG DMA EXT failed, trying unqueued
     Mar 28 17:37:59 CORE kernel: ata1.00: failed to get NCQ Send/Recv Log Emask 0x1
     Mar 28 17:37:59 CORE kernel: ata1.00: ATA-9: Samsung SSD 840 Series, S14CNEACA47078H, DXT09B0Q, max UDMA/133
     Mar 28 17:37:59 CORE kernel: ata1.00: 234441648 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
     Mar 28 17:37:59 CORE kernel: ata1.00: failed to get NCQ Send/Recv Log Emask 0x1
     Mar 28 17:37:59 CORE kernel: ata1.00: configured for UDMA/133
     Mar 28 17:37:59 CORE kernel: ata1.00: Enabling discard_zeroes_data
     Mar 28 17:37:59 CORE kernel: sd 1:0:0:0: [sdb] 234441648 512-byte logical blocks: (120 GB/112 GiB)
     Mar 28 17:37:59 CORE kernel: sd 1:0:0:0: [sdb] Write Protect is off
     Mar 28 17:37:59 CORE kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
     Mar 28 17:37:59 CORE kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
     Mar 28 17:37:59 CORE kernel: ata1.00: Enabling discard_zeroes_data
     Mar 28 17:37:59 CORE kernel: sdb: sdb1
     Mar 28 17:37:59 CORE kernel: ata1.00: Enabling discard_zeroes_data
     Mar 28 17:37:59 CORE kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
     Mar 28 17:38:11 CORE emhttp: Samsung_SSD_840_Series_S14CNEACA47078H (sdb) 117220824
     Mar 28 17:38:11 CORE emhttp: import 5 cache device: sdb
     Mar 28 17:38:12 CORE emhttp: shcmd (7): /usr/sbin/hdparm -S0 /dev/sdb &> /dev/null
     Mar 28 17:38:12 CORE emhttp: Samsung_SSD_840_Series_S14CNEACA47078H (sdb) 117220824
     Mar 28 17:38:12 CORE emhttp: import 5 cache device: sdb
     [the same pair of emhttp import messages for sdb repeats twice at 17:38:41 and again at 17:38:59]
     Mar 28 17:38:59 CORE emhttp: shcmd (32): /usr/sbin/hdparm -S0 /dev/sdb &> /dev/null
     Mar 28 17:38:59 CORE kernel: BTRFS: device fsid 640b4483-c816-4562-b098-6b90c83f57ef devid 1 transid 376401 /dev/sdb1
     Mar 28 17:39:00 CORE kernel: BTRFS info (device sdb1): disk space caching is enabled
     Mar 28 17:40:43 CORE kernel: BTRFS warning (device sdb1): failed to load free space cache for block group 1022909087744, rebuild it now
     Mar 28 17:40:45 CORE kernel: BTRFS (device sdb1): parent transid verify failed on 1017575800832 wanted 376102 found 376100
     Mar 28 17:40:45 CORE kernel: BTRFS: read error corrected: ino 1 off 1017575800832 (dev /dev/sdb1 sector 81860160)
     Mar 28 17:40:45 CORE kernel: BTRFS: read error corrected: ino 1 off 1017575804928 (dev /dev/sdb1 sector 81860168)
     Mar 28 17:40:45 CORE kernel: BTRFS: read error corrected: ino 1 off 1017575809024 (dev /dev/sdb1 sector 81860176)
     Mar 28 17:40:45 CORE kernel: BTRFS: read error corrected: ino 1 off 1017575813120 (dev /dev/sdb1 sector 81860184)
     Mar 28 17:41:10 CORE kernel: BTRFS (device sdb1): parent transid verify failed on 1017576390656 wanted 376102 found 375923
     Mar 28 17:41:10 CORE kernel: BTRFS: read error corrected: ino 1 off 1017576390656 (dev /dev/sdb1 sector 81861312)
     Mar 28 17:41:10 CORE kernel: BTRFS: read error corrected: ino 1 off 1017576394752 (dev /dev/sdb1 sector 81861320)
     Mar 28 17:41:10 CORE kernel: BTRFS: read error corrected: ino 1 off 1017576398848 (dev /dev/sdb1 sector 81861328)
     Mar 28 17:41:10 CORE kernel: BTRFS: read error corrected: ino 1 off 1017576402944 (dev /dev/sdb1 sector 81861336)
     Mar 28 17:48:08 CORE kernel: BTRFS: checksum/header error at logical 1018290290688 on dev /dev/sdb1, sector 83255648: metadata leaf (level 0) in tree 5
     Mar 28 17:48:08 CORE kernel: BTRFS: checksum/header error at logical 1018290290688 on dev /dev/sdb1, sector 83255648: metadata leaf (level 0) in tree 5
     Mar 28 17:48:08 CORE kernel: BTRFS: bdev /dev/sdb1 errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
     Mar 28 17:48:08 CORE kernel: BTRFS: checksum/header error at logical 1017705529344 on dev /dev/sdb1, sector 82113536: metadata leaf (level 0) in tree 7
     Mar 28 17:48:08 CORE kernel: BTRFS: checksum/header error at logical 1017705529344 on dev /dev/sdb1, sector 82113536: metadata leaf (level 0) in tree 7
     Mar 28 17:48:08 CORE kernel: BTRFS: bdev /dev/sdb1 errs: wr 0, rd 0, flush 0, corrupt 1, gen 1
     Mar 28 18:04:18 CORE kernel: BTRFS: checksum/header error at logical 1018290290688 on dev /dev/sdb1, sector 83255648: metadata leaf (level 0) in tree 5
     Mar 28 18:04:18 CORE kernel: BTRFS: checksum/header error at logical 1017705529344 on dev /dev/sdb1, sector 82113536: metadata leaf (level 0) in tree 7
     Mar 28 18:04:18 CORE kernel: BTRFS: checksum/header error at logical 1017705529344 on dev /dev/sdb1, sector 82113536: metadata leaf (level 0) in tree 7
     Mar 28 18:04:18 CORE kernel: BTRFS: bdev /dev/sdb1 errs: wr 0, rd 0, flush 0, corrupt 1, gen 2
     Mar 28 18:04:18 CORE kernel: BTRFS: checksum/header error at logical 1018290290688 on dev /dev/sdb1, sector 83255648: metadata leaf (level 0) in tree 5
     Mar 28 18:04:18 CORE kernel: BTRFS: bdev /dev/sdb1 errs: wr 0, rd 0, flush 0, corrupt 2, gen 2
     Mar 28 18:22:28 CORE kernel: BTRFS warning (device sdb1): csum failed ino 1561639 off 323584 csum 3617176600 expected csum 3648121054
     ------------ Cache Drive 3 Errors ---------
     Mar 28 17:37:59 CORE kernel: ata7: SATA max UDMA/133 abar m512@0xf4610000 port 0xf4610100 irq 30
     Mar 28 17:37:59 CORE kernel: ata7: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
     Mar 28 17:37:59 CORE kernel: ata7.00: supports DRM functions and may not be fully accessible
     Mar 28 17:37:59 CORE kernel: ata7.00: disabling queued TRIM support
     Mar 28 17:37:59 CORE kernel: ata7.00: ATA-9: Samsung SSD 850 PRO 128GB, S24ZNXAGB10425J, EXM02B6Q, max UDMA/133
     Mar 28 17:37:59 CORE kernel: ata7.00: 250069680 sectors, multi 1: LBA48 NCQ (depth 31/32), AA
     Mar 28 17:37:59 CORE kernel: ata7.00: supports DRM functions and may not be fully accessible
     Mar 28 17:37:59 CORE kernel: ata7.00: disabling queued TRIM support
     Mar 28 17:37:59 CORE kernel: ata7.00: configured for UDMA/133
     Mar 28 17:37:59 CORE kernel: sd 7:0:0:0: [sdh] 250069680 512-byte logical blocks: (128 GB/119 GiB)
     Mar 28 17:37:59 CORE kernel: sd 7:0:0:0: [sdh] Write Protect is off
     Mar 28 17:37:59 CORE kernel: sd 7:0:0:0: [sdh] Mode Sense: 00 3a 00 00
     Mar 28 17:37:59 CORE kernel: sd 7:0:0:0: [sdh] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
     Mar 28 17:37:59 CORE kernel: sdh: sdh1
     Mar 28 17:37:59 CORE kernel: sd 7:0:0:0: [sdh] Attached SCSI disk
     Mar 28 17:38:11 CORE emhttp: Samsung_SSD_850_PRO_128GB_S24ZNXAGB10425J (sdh) 125034840
     Mar 28 17:38:11 CORE emhttp: import 6 cache device: sdh
     Mar 28 17:38:12 CORE emhttp: shcmd (8): /usr/sbin/hdparm -S0 /dev/sdh &> /dev/null
     Mar 28 17:38:12 CORE emhttp: Samsung_SSD_850_PRO_128GB_S24ZNXAGB10425J (sdh) 125034840
     Mar 28 17:38:12 CORE emhttp: import 6 cache device: sdh
     [the same pair of emhttp import messages for sdh repeats twice at 17:38:41 and again at 17:38:59]
     Mar 28 17:38:59 CORE emhttp: shcmd (33): /usr/sbin/hdparm -S0 /dev/sdh &> /dev/null
     Mar 28 17:38:59 CORE kernel: BTRFS: device fsid 640b4483-c816-4562-b098-6b90c83f57ef devid 3 transid 376401 /dev/sdh1
     Mar 28 17:43:47 CORE kernel: BTRFS: checksum error at logical 1000221642752 on dev /dev/sdh1, sector 3925128, root 5, inode 1426592, offset 7262208, length 4096, links 1 (path: appdata/plexpassmediaserver/Plex Media Server/Media/localhost/e/4b7e5a96770f020794acd4624d6c7fce2c6a9dc.bundle/Contents/Indexes/index-sd.bif)
     Mar 28 17:43:47 CORE kernel: BTRFS: checksum error at logical 1000220659712 on dev /dev/sdh1, sector 3923208, root 5, inode 1426592, offset 6279168, length 4096, links 1 (path: appdata/plexpassmediaserver/Plex Media Server/Media/localhost/e/4b7e5a96770f020794acd4624d6c7fce2c6a9dc.bundle/Contents/Indexes/index-sd.bif)
     Mar 28 17:43:47 CORE kernel: BTRFS: bdev /dev/sdh1 errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
     Mar 28 17:43:47 CORE kernel: BTRFS: bdev /dev/sdh1 errs: wr 0, rd 0, flush 0, corrupt 2, gen 0
     Mar 28 17:43:47 CORE kernel: BTRFS: checksum error at logical 1000221151232 on dev /dev/sdh1, sector 3924168, root 5, inode 1426592, offset 6770688, length 4096, links 1 (path: appdata/plexpassmediaserver/Plex Media Server/Media/localhost/e/4b7e5a96770f020794acd4624d6c7fce2c6a9dc.bundle/Contents/Indexes/index-sd.bif)
     Mar 28 18:07:41 CORE kernel: BTRFS: bdev /dev/sdh1 errs: wr 0, rd 0, flush 0, corrupt 3031, gen 0
     Mar 28 18:08:42 CORE kernel: BTRFS: checksum error at logical 1178651320320 on dev /dev/sdh1, sector 202069232, root 5, inode 1561639, offset 323584, length 4096, links 1 (path: appdata/plexpassmediaserver/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db-wal)
     Mar 28 18:08:42 CORE kernel: BTRFS: bdev /dev/sdh1 errs: wr 0, rd 0, flush 0, corrupt 3701, gen 0
     Mar 28 18:08:42 CORE kernel: BTRFS: bdev /dev/sdh1 errs: wr 0, rd 0, flush 0, corrupt 3702, gen 0
     Mar 28 18:22:28 CORE kernel: BTRFS: read error corrected: ino 1561639 off 323584 (dev /dev/sdh1 sector 202069232)
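One thing worth knowing about the "bdev ... errs: wr 0, rd 0, flush 0, corrupt 3702, gen 0" lines in these dumps: those counters are cumulative per device, which is why they keep climbing until explicitly reset. A small sketch for pulling the counters out of a syslog line of the quoted format (the regex and field names follow the log text above, not any official btrfs API):

```python
import re

# Matches kernel lines of the form seen in the dumps, e.g.
# "BTRFS: bdev /dev/sdh1 errs: wr 0, rd 0, flush 0, corrupt 3702, gen 0"
ERRS_RE = re.compile(
    r"bdev (?P<dev>\S+) errs: wr (?P<wr>\d+), rd (?P<rd>\d+), "
    r"flush (?P<flush>\d+), corrupt (?P<corrupt>\d+), gen (?P<gen>\d+)"
)

def parse_btrfs_errs(line: str) -> dict:
    """Extract the per-device BTRFS error counters from a syslog line."""
    m = ERRS_RE.search(line)
    if m is None:
        raise ValueError("no BTRFS error-counter summary in line")
    d = m.groupdict()
    return {"dev": d["dev"], **{k: int(v) for k, v in d.items() if k != "dev"}}

line = ("Mar 28 18:08:42 CORE kernel: BTRFS: bdev /dev/sdh1 errs: "
        "wr 0, rd 0, flush 0, corrupt 3702, gen 0")
print(parse_btrfs_errs(line))
```

Running this over an exported syslog makes it easy to see whether the corrupt/gen counts are actually still increasing after a fix, rather than just being the old totals re-printed at mount.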
  19. Hey, I was taking a look at my stats tonight, just bored, and I saw some odd spikes. Is the reading wrong? Spikes from the VM maybe? I have dual NICs in a LAGG, but 17.x GB/s is not right from the network side.
  20. Yes, I am going to have to break this into a series of posts because it's too much to cover in just one. Honestly been bogged down with other stuff since I got back from CES, but I really need to find some time to finish that up. Sorry to be a ball buster but I am very interested! Is this still being worked on?
  21. Hey, Did you finish the PDF By chance? Still really interested to know how this was done.
  22. I am still confused and would love to understand this; can someone help out with a little guidance? What I think I understand is that he is using btrfs snapshots, but I don't understand how he is using one image. I tried to do this today and am failing: when I try to boot the 2 VMs, one VM will crash. I tried your cp /mnt/user/vdisk/windows /mnt/user/vdisk/windows1 --reflink, but still no luck.
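One hedged note on the reflink copy quoted above: reflinks only work within a single btrfs (or XFS) filesystem, and on Unraid the /mnt/user paths go through the FUSE user-share layer, which to my knowledge does not pass reflinks through, so running the copy against the pool path (e.g. under /mnt/cache) is usually what's needed. The paths in the post are the original poster's; the script below is only a generic illustration, using GNU cp's --reflink=auto so it falls back to a normal copy on filesystems without reflink support:

```python
import subprocess, tempfile, os, filecmp

def reflink_copy(src: str, dst: str) -> None:
    """Clone src to dst with GNU cp; --reflink=auto falls back to a plain
    copy when the filesystem (or a FUSE layer) cannot share extents."""
    subprocess.run(["cp", "--reflink=auto", src, dst], check=True)

# Demo on a throwaway file; on btrfs the clone is near-instant and shares
# data blocks with the original until either copy is modified (CoW).
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "vdisk.img")
    dst = os.path.join(d, "vdisk-clone.img")
    with open(src, "wb") as f:
        f.write(b"\0" * 4096)  # stand-in for a VM disk image
    reflink_copy(src, dst)
    print(filecmp.cmp(src, dst, shallow=False))  # -> True
```

If both VMs point at literally the same image file they will corrupt it; the snapshot/reflink approach gives each VM its own copy-on-write clone, which may be the piece that was missing.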
  23. A little confused about cache drive usable space; I have looked around but nothing gave me my exact answer. If I have 2 cache drives at 128 GB, I would get 128 GB of space, correct? If I replaced one 128GB drive with a 256GB and kept a 128GB, how much usable space would I see? If I added another 128GB on top of the 2 current 128GB drives, how much usable space would I see? Thank you in advance.
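For a btrfs pool using the raid1 profile (which, as far as I know, is the Unraid cache-pool default when there is more than one device), every block is stored on two different devices, so usable space is approximately min(total/2, total minus the largest device), since a device cannot mirror against itself. A sketch covering the three scenarios asked about (the function name is mine, and this ignores metadata overhead):

```python
def raid1_usable_gb(sizes):
    """Approximate usable space of a btrfs raid1 pool in GB: each block
    lives on two devices, and the largest device can only be filled as
    far as the others can mirror it."""
    total = sum(sizes)
    return min(total // 2, total - max(sizes))

print(raid1_usable_gb([128, 128]))       # two 128 GB drives -> 128
print(raid1_usable_gb([256, 128]))       # the extra 128 GB has no mirror partner -> 128
print(raid1_usable_gb([128, 128, 128]))  # three 128 GB drives -> 192
```

So replacing one 128GB with a 256GB alone gains nothing until the second drive is also upgraded, while adding a third 128GB drive raises usable space to roughly 192 GB.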