Jerky_san

Everything posted by Jerky_san

  1. If we can, let's start with the obvious stuff that I've got to ask (sorry if you have already done any of this; I honestly don't know the exact location in the diagnostics, hence the questions). Have you done CPU isolation in the settings, or CPU pinning? Does it stutter badly, or is it just frame drops? Are you using GSYNC?
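
     If it helps, here's a rough way to check isolation and pinning from the terminal. This is only a minimal sketch; the qemu process match and the core numbers below are examples, not taken from any real setup.

     ```
     # Show whether the kernel was booted with isolated CPUs
     tr ' ' '\n' < /proc/cmdline | grep -i isolcpus

     # Show which cores a running VM process is currently allowed on
     # (the qemu pattern is a placeholder for your VM's process)
     pid=$(pgrep -f qemu | head -n 1)
     taskset -cp "$pid"

     # Pin that process to cores 4-7 as a test (example core numbers)
     taskset -cp 4-7 "$pid"
     ```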
  2. I've tried setting it to a minimum I feel comfortable with, but do you happen to know what happens if it does try to write and the file can't fit? Does it retry or something, or can it be catastrophic?
  3. Is there a document or something that talks about things like this? Until yesterday I had no idea Unraid could switch between reconstruct write and read/modify/write mode without me explicitly telling it to. Do you know if there are other settings that can turn on or off when the system deems it necessary? Anyway, it would be nice to know of anything else like this so I can remember it if I'm ever trying to debug an issue.
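
     For future reference, I believe this mode is exposed through Unraid's mdcmd interface, roughly as below. I'm hedging here since I only learned about it yesterday, so double-check the variable name and values before relying on them.

     ```
     # Show the current write method, if I recall the variable name right
     mdcmd status | grep -i md_write_method

     # I believe 1 forces reconstruct write ("turbo write"), 0 forces
     # read/modify/write, and the Auto setting lets Unraid choose
     mdcmd set md_write_method 1
     ```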
  4. Just a quick question: are your drives connected via the Fujitsu RAID card or via the onboard ports? Also, is there some sort of expander or something in the mix?
  5. You might try this, but I'd caution you to wait for others to chime in in case they know something else to try. Click the little angled arrow at the top if for some reason you can't click the actual link. This may help you, I'm unsure. As I said, I'd wait and see if others know more.
  6. Sigh, I feel like I am out of my depth on this one. The network graph seems to show good performance, though I would expect something closer to line speed. But you also said using the mover causes it to write to the array very rapidly. I am hoping someone else will chime in who can better help you. I can say that looking at your syslog, it's jam-packed full of these errors. It's interesting because I don't see any drives labeled disk29 or disk0:

     ```
     Jul 6 02:49:28 Jarvis kernel: md: do_drive_cmd: disk29: ATA_OP e0 ioctl error: -5
     Jul 6 02:49:29 Jarvis kernel: mdcmd (5077): spindown 0
     Jul 6 02:49:29 Jarvis emhttpd: error: mdcmd, 2723: Input/output error (5): write
     Jul 6 02:49:29 Jarvis kernel: md: do_drive_cmd: disk0: ATA_OP e0 ioctl error: -5
     Jul 6 02:49:29 Jarvis kernel: mdcmd (5078): spindown 29
     Jul 6 02:49:29 Jarvis emhttpd: error: mdcmd, 2723: Input/output error (5): write
     Jul 6 02:49:29 Jarvis kernel: md: do_drive_cmd: disk29: ATA_OP e0 ioctl error: -5
     ```
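
     If you want to see how often those errors show up and which disk numbers are involved, something like this should work from the terminal (just a sketch; adjust the log path if yours differs):

     ```
     # Count the failing do_drive_cmd errors per disk number
     grep -o "do_drive_cmd: disk[0-9]*" /var/log/syslog | sort | uniq -c
     ```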
  7. Would you happen to know what file system it had? XFS or BTRFS?
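
     If the drive is still attached and you're not sure, this should show the filesystem on each partition from the Unraid terminal (sdX1 below is a placeholder for the real partition):

     ```
     # List all devices with their detected filesystems
     lsblk -f

     # Or query one partition directly (replace sdX1)
     blkid /dev/sdX1
     ```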
  8. When it happens again, try to ping the gateway (router) from the wireless access points and from Unraid itself. Also, what kind of router are you using?
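
     From the Unraid side, something like this is what I mean (a simple sketch; it just discovers the default gateway and pings it):

     ```
     # Find the default gateway and ping it from Unraid
     gw=$(ip route | awk '/^default/ {print $3; exit}')
     echo "gateway: $gw"
     ping -c 10 "$gw"
     ```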
  9. If it weren't for the RAM spiking and then dumping to the cache afterward, I'd want to say network issue, but the RAM spiking/dumping is throwing me off.
  10. If you're willing to break your cache up, try running one drive formatted as XFS if a single BTRFS drive doesn't work. Tbh I rarely, if ever, leave real data on the cache drive for long; I have a mover schedule that moves it off every night. I also used to run BTRFS but had hang-ups and other issues that were all alleviated by going to XFS.
  11. Hmm, that is strange. I kind of wonder if it isn't a BTRFS issue, though I know a few people who run mirrored-pair cache drives on BTRFS without issue. The fact that it waits before dumping to the cache drive is pretty odd. The weird thing is, to my knowledge there isn't a lot of tuning you can do on the cache aside from changing the format. I kind of wonder what would happen if you wrote a file to the cache and then copied it to the cache again from within Unraid; that would rule out networking and a few other things at least. Maybe someone else can chime in as well.
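
      Something like this from the Unraid terminal is the test I have in mind; it takes the network out of the picture entirely (the size and paths are just examples):

      ```
      # Write a 4GB test file straight to the cache, bypassing RAM caching
      dd if=/dev/zero of=/mnt/cache/testfile bs=1M count=4096 oflag=direct

      # Then copy it cache-to-cache and time it
      time cp /mnt/cache/testfile /mnt/cache/testfile2

      # Clean up afterwards
      rm /mnt/cache/testfile /mnt/cache/testfile2
      ```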
  12. When you do the transfers, do the write speeds shown for the cache on the Main page reflect what you are seeing as well? How big are the files you are testing with? The picture is interesting.
  13. There are a few ways, but one is to reboot and, when the boot loader displays, choose the memory check option at the bottom, called "memtest" I believe. You can also make a Linux live USB boot for the purpose, but I'd give memtest a try first. How old is the hardware? Also, is it overclocked at all?
  14. It is kind of the luck of the draw, to be honest. I have one drive that has had 8 for a year or so; it has never failed, and the checksums on my data are always correct. If the count ticks up more, I'd start to get nervous. They are both 3 years old, so infant mortality shouldn't be a problem, though if they've run hot or something it could shorten their life.
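
      To keep an eye on it, you can dump the relevant SMART counters every so often; assuming we're talking about reallocated/pending sectors here, something like this works from the terminal (sdX is a placeholder for the drive):

      ```
      # Show the sector-health attributes for one drive (replace sdX)
      smartctl -A /dev/sdX | grep -Ei "reallocated|pending|uncorrect"
      ```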
  15. I found this in your log:

      ```
      Jul 6 10:39:54 Monster kernel: mce: [Hardware Error]: Machine check events logged
      Jul 6 10:39:54 Monster kernel: mce: [Hardware Error]: CPU 9: Machine Check: 0 Bank 5: be00000000800400
      Jul 6 10:39:54 Monster kernel: mce: [Hardware Error]: TSC 0 ADDR 3fff810865fd MISC 7fff
      ```

      It seems like memory, or maybe the CPU. You might try reseating your CPU if you run a memory check for a good while and don't get any memory errors; I'd make sure to let it run a long time. If you do the reseat, also check the pins on the board side just to make sure there aren't any bent ones. And I'd wait for others to chime in, since I'm not 100% certain or anything.
  16. I do have 32 gigs, so it most definitely could read at least two if not three files in. I am going to test one last thing, because I think I realize what's going on. The only reason it matters to me is that I've had my array nearly packed before, and I never remember this occurring at any other time. The two drives have almost exactly the same amount of free space, 1.96TB, so I wonder if it is doing what you said, but only because they are equally full, which opens the gate to allow it.

      Edit: I tested it by using unBALANCE to make the two drives' free space different. It does indeed work like I always remembered, writing to one drive at a time. I guess I somehow used the mover to move files that left the two drives with exactly the same free space, which makes the mover initiate two moves instead of one move at a time. The files were, I guess, so close in size that it would bounce between the two drives, since I have enough RAM to easily cache multiple files from the cache drive. Once the drives' free space differed enough (greater than a 10GB difference, it appears), it just writes to one drive until it reaches the same free space again. I am going to do what @johnnie.black recommended and switch to fill-up instead, but that explains why I never saw it in previous versions; it must have just happened sometime in the past week.

      Edit 2: Thank you @trurl @TexasUnraid @johnnie.black @Frank1940 for your assistance in figuring it out.
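
      For anyone following along, here's a toy model of the behavior as I understand it. This is not Unraid's actual allocator, just an illustration of why two equally-full disks ping-pong under most-free, while fill-up would stick to one disk until it hits the minimum free space.

      ```
      #!/bin/bash
      # Toy model only -- not Unraid's real allocation code.
      declare -A free=( [disk1]=1960 [disk2]=1960 )   # free space in GB

      pick_most_free() {
        # most-free: pick whichever disk has more free space;
        # ties go to disk1, so equal disks alternate write after write
        if (( free[disk1] >= free[disk2] )); then echo disk1; else echo disk2; fi
      }

      for f in 10 10 10 10; do                        # four 10GB files
        d=$(pick_most_free)
        (( free[$d] -= f ))
        echo "wrote ${f}GB to $d (disk1=${free[disk1]} disk2=${free[disk2]})"
      done
      ```

      Run it and the writes alternate disk1, disk2, disk1, disk2, which matches the bouncing I was seeing.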
  17. It does appear to have fixed it. It's very strange, though; the files were each 10GB or so, and I'd have expected it to take time to write one before switching to the other drive. Anyway, thank you for your explanation and for helping me.
  18. I'll set it to fill-up once it gets done rebooting and test. I guess it bases the decision on absolute free space and not % of free space? Also, thank you for explaining.
  19. It's booted in safe mode with no Dockers or plugins running. I did this specifically to make sure nothing was interfering with the tests.
  20. I mean, this can run for hours and still be at the same speed; it isn't a refresh issue. There was even a thread specifically asking about this, and johnnie.black described exactly what I'm seeing: slow write speeds.
  21. So... you're saying that the mover writing to two data drives at the exact same time, as seen in the picture, is considered normal? It drags the write speed down to a crawl doing this. I don't remember it doing that in previous versions, either.
  22. I noticed that when I start my mover process, it literally writes to two data drives at once. The reason I said I was going to downgrade was to do more research; I was just throwing it out there, wondering if anyone had observed something similar happening to them, and as stated I'd grab a diagnostic before I did it. All I do is start the array and click mover, and it begins writing to two data disks at once. If I tell the mover to stop, it eventually will, and at the very end, as the mover is stopping, it suddenly writes to only one drive again. Running unBALANCE makes it write to only a single data drive as well. Diagnostics were taken during a safe-mode run, with all Dockers halted as well. The only correlation I could make between the two drives is that they are both less full than the others and both are encrypted.
  23. I'm going to downgrade soon, after I collect a diagnostic, but I was wondering whether any of you have experienced the mover attempting to write to two drives at the same time? I also noticed those are the only drives in my array that are encrypted so far.