reefcrazed

Everything posted by reefcrazed

  1. Tagging along; I have this exact same issue, and it started recently. I started a separate thread about it today.
  2. So this is a new issue that just popped up for me. My version of Unraid is 6.9.2, with an array of eight 14TB drives, one of them parity. Randomly, when I write to Unraid it takes forever, even just creating a folder or copying a file; sometimes writes are immediate. None of the drives are set to power off, and they always have full green balls next to the device. When this happens the web interface is fully responsive and there is nothing new in the logs. If I drop to a bash prompt and run htop, the CPUs do not appear to be under heavy load, but in the web interface a core or two will go to 100%, which is very odd. I have seen these pauses last from 10 seconds up to several minutes. Copying to individual disk shares shows the exact same long pauses. The only thing that may be odd about my setup is that I run my drives very full, probably 98% or so. Would the array being full cause this? (A diagnostic sketch follows after this list.)
  3. I have the same setup as the OP, the same motherboard and SAS controller. Mine does the exact same thing with Unraid.
  4. What are you running? Has anyone tried ESXi to see if it does register the correct temps now with this update?
  5. The chances that it is fixed are probably zero. I am finally ditching my board, 192GB of ECC, and dual Xeons, and going low power. The monitoring not working does not bother me that badly, but the amount of power used all day does.
  6. I would love to get excited about that, but so far nothing has fixed this. I am pinning it down to the hardware. Post back in a week, please? I would love to be able to keep this board even longer, although it is getting old. I upgraded my processors to the fastest the board will take and recently bumped the RAM up to 192GB, so it would be nice to finally get this ridiculous problem fixed.
  7. Also, I tried just doing a net view \\servername and it shows nothing. The share is definitely there; if I use the IP address I can get to it. A simple ping or nslookup shows that DNS resolution is perfect.
  8. I think they are all a mix of Windows Server 2016 and Windows 10.
  9. I have updated from the first version of Unraid; I really never needed to, so I just let it ride all these years. I updated last week, and there seem to be a billion great changes. I have a domain, but I do not want the Unraid box joined to it. I have the share set for workgroup and public. If I try to go to \\servername\shared I get an error; if I go to \\192.168.0.30\shared I get right to it. I am 100% positive my DNS is right. What would cause this? (A name-resolution sketch follows after this list.)
  10. I have an old install, version 1.0.0 I think, and those drives are formatted in ReiserFS. I tried to mount those drives in Debian last night and could not. I also tried some Windows programs and nothing would mount them. I remember being able to mount those drives in Windows in the past, but I cannot seem to now. Is there any reason anyone can think of that this would not be possible? (A mount sketch follows after this list.)
  11. I will in a few days. I need to reboot it after some data is done copying, and that will take a while.
  12. Brand new vanilla install, and I could never see my shares. I decided to PuTTY into the server and run Samba, and boom, I can see my shares now. On every reboot Samba does not start. On other flavors of Linux I am familiar with how to start a service on boot; on Unraid, how can I start Samba on boot? (A go-file sketch follows after this list.)
  13. So basically you guys are seeing a bug that is an insane five years old now. The board has not changed price in all that time either; I paid $319.99 plus tax for it back in 2013.
  14. I have had this motherboard since 9-1-2013 and was one of the first to post about this problem in the Newegg comments for it; my post was under the user James G. I spent a few months going around with ASRock about it, and as always their response was that if it were the board, then everyone would have the problem. What they fail to see is that not everyone monitors IPMI issues; I run mine on vSphere, so I do monitor these events. So far the board has not died, but I sure would have liked to be able to fully monitor the health of my server. But yeah, mine works fine after a reboot, and then after an hour or two the CPU temp on both goes completely nuts. (A logging sketch follows after this list.)
  15. Still rocking my Lian-Li PC-D8000; I just added more RAM, and I think it is sitting at 104GB, which is a strange number to end up with.
  16. Here ya go, removed passwords I hope. log.zip
  17. Something is definitely wrong; I have been going a few months without a reboot. Recently my file system went read-only and no drives appear bad. I rebooted and ran a full permissions reset, and all seemed okay, then this morning another read-only file system. Help!! I have a huge amount of data, totaling 35TB. (A reiserfsck sketch follows after this list.)
      Mar 31 03:44:33 Hippocampus shfs/user0: shfs_write: write: (12) Cannot allocate memory
      Mar 31 03:44:33 Hippocampus kernel: REISERFS error (device md7): reiserfs-2025 reiserfs_cache_bitmap_metadata: bitmap block 786432 is corrupted: first bit must be 1
      Mar 31 03:44:33 Hippocampus kernel: REISERFS (device md7): Remounting filesystem read-only
      Mar 31 03:44:33 Hippocampus kernel: REISERFS warning (device md7): clm-6006 reiserfs_dirty_inode: writing inode 257 on readonly FS
      Mar 31 03:44:33 Hippocampus logger: rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
  18. Or possibly an older Lian Li; they had several options in the past that supported large numbers of drives. I guess I got my D8000 at the right time.
  19. I have one of the older Chenbros, and it reminds me of that one. For the life of me I could not figure out how to properly remove the front of it, and I ended up accidentally breaking the front trying. It really pissed me off that there were no instructions and that they did not make it easy. The case was also rather expensive for what it was, I think around $180. I hope these are more reasonable.
  20. I also want to mention that the AOC cards were a major PITA where vSphere was concerned. I ordered these cards and flashed them to IT mode and have not looked back: http://www.amazon.com/gp/product/B0034DMSO6/ref=oh_details_o00_s00_i00?ie=UTF8&psc=1 I followed so many different guides and settings on the Super Micro AOC cards, and they just did too much crazy stuff to keep them, so I eBayed them off this week. They work beautifully in other settings, but for VT-d they are not suitable. The M1015 card in IT mode works great, zero issues so far. I think I paid a little over $100 for each on Amazon.
  21. Somehow I did not get the email alert that you replied to this, but even though it has been a long time I will answer anyway. I used 5-way SATA splitters; with this many drives you just have to, plus the ones I used had very short splits and it just looked so nice. I am not familiar with the backplane you speak of. I can tell you the case has zero heat issues; it is nice and cool in there, and it could go up 10 degrees and still not be hot. Over half the drives are spun down most of the day because Unraid does not need them. I am using three fans on the rear and three on one side of the hard drive cages, but not both sides. Heat is not an issue. Since there are three computers in this room I was more concerned with noise, so I did not go crazy on the hard drives.
  22. Well, I did find a solution: buying the IBM ServeRAID controller off Amazon and flashing it with the M1015 IT-mode firmware. I have the second card coming later this week, but it appears just swapping one card fixed both. (A verification sketch follows after this list.)
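
For the slow-write issue in post 2: a minimal diagnostic sketch, assuming SSH access to the Unraid box. The mount paths are the standard /mnt/disk* layout; iostat comes from the sysstat package, which is not part of a stock install and may need to be added separately.

```bash
# Check how full each array disk is. A nearly full data disk is a common
# cause of long write stalls on Unraid, so look for members above ~95%.
df -h /mnt/disk* /mnt/user

# Watch per-disk utilization and wait times while a copy stalls; one member
# sitting near 100% %util points at that disk rather than the CPU.
iostat -x 5 12

# Make sure nothing else is hammering the array in the background.
ps aux | grep -E 'rsync|mover|mdcmd' | grep -v grep
```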
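For the \\servername problem in posts 7 and 9: a minimal sketch of checks run on the Unraid side, assuming the hostname in the share path matches the server's actual host/NetBIOS name. Windows clients resolve SMB names through DNS, NetBIOS, or mDNS depending on version and settings, so it is worth confirming what Samba is actually advertising.

```bash
# Show the NetBIOS name, workgroup, and minimum protocol Samba is configured with.
testparm -s 2>/dev/null | grep -iE 'netbios name|workgroup|min protocol'

# List the exported shares by name and by IP; if the IP works and the name
# does not, the problem is name resolution rather than the shares themselves.
smbclient -N -L //servername
smbclient -N -L //192.168.0.30

# Confirm the NetBIOS name daemon is running; without nmbd, name-based
# browsing from older clients will fail even though DNS looks fine.
ps aux | grep -E 'smbd|nmbd|wsdd' | grep -v grep
```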
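For post 10: a minimal sketch for mounting an old ReiserFS data disk read-only on Debian, assuming the data partition shows up as /dev/sdb1 (substitute the real device). Two common snags: the ReiserFS module has been dropped from very recent kernels, and the userland tools are not installed by default.

```bash
# Install the ReiserFS userland tools and try to load the kernel module.
sudo apt-get install -y reiserfsprogs
sudo modprobe reiserfs || echo "this kernel has no ReiserFS support"

# Identify the partition and confirm it really is ReiserFS.
lsblk -f /dev/sdb
sudo blkid /dev/sdb1

# Mount read-only so nothing on the old array disk is modified.
sudo mkdir -p /mnt/oldunraid
sudo mount -t reiserfs -o ro /dev/sdb1 /mnt/oldunraid
```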
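For post 12: emhttp normally starts Samba itself once the array comes up, so shares that never appear usually point at something else; still, the standard place for boot-time commands on Unraid is the go script on the flash drive. A minimal sketch, assuming the stock Slackware-style rc script path /etc/rc.d/rc.samba:

```bash
# /boot/config/go runs at every boot; look at what is already there first.
cat /boot/config/go

# Append a Samba restart so it runs after emhttp has started.
echo '/etc/rc.d/rc.samba restart' >> /boot/config/go

# To start Samba by hand right now, without rebooting:
/etc/rc.d/rc.samba start
```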
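For the runaway IPMI temperature readings in post 14: a minimal logging sketch using ipmitool, assuming the BMC is reachable over the network; the IP address and credentials below are placeholders. A timestamped log of what the BMC reports, compared against OS-level readings, is useful evidence when going back to ASRock.

```bash
# Poll the BMC temperature sensors once a minute and keep a timestamped log.
while true; do
    date
    ipmitool -I lanplus -H 192.168.0.40 -U admin -P 'changeme' sdr type Temperature
    sleep 60
done | tee -a /var/log/ipmi-temps.log
```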
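For the ReiserFS errors in post 17: a minimal sketch of the usual reiserfsck sequence, assuming the affected device really is md7 and that the array is started in maintenance mode (and backed up as far as practical) before anything is run against the disk.

```bash
# Read-only check first; this reports problems without changing anything.
reiserfsck --check /dev/md7

# If the check says so, repair the minor correctable issues.
reiserfsck --fix-fixable /dev/md7

# Only if reiserfsck explicitly reports tree corruption should a rebuild be run;
# it is slow and risky, so image or back up the disk before uncommenting this.
# reiserfsck --rebuild-tree /dev/md7
```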
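For the M1015 / ServeRAID crossflash in posts 20 and 22: a minimal sketch for confirming from Linux that a card is actually running the IT-mode firmware, assuming LSI's sas2flash utility has been copied onto the host (it is not bundled with the OS).

```bash
# The card should enumerate as an LSI SAS2008 controller.
lspci | grep -i 'SAS2008\|LSI'

# List every controller with its firmware version and product ID; an IT-mode
# flash shows an "IT" product ID rather than "IR".
./sas2flash -listall

# More detail on the first controller, including BIOS and NVDATA versions.
./sas2flash -c 0 -list
```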