Everything posted by KptnKMan

  1. Everything is working fine for me, and I also only have the same 2 monitors now. From what I can see, the other "jc42" sensor is not detected, but I have all the monitors that I need. So it looks like they fixed it somewhere between BIOS updates.
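If anyone else wants to see what their board is actually exposing, something like this from the console (assuming lm-sensors and the usual hwmon drivers are loaded, as the temp plugins rely on) will list the sensors the kernel currently sees:
```bash
# List detected sensors; jc42 entries are the DIMM SPD temperature sensors,
# so whether they appear depends on the BIOS/SMBus setup.
sensors
cat /sys/class/hwmon/hwmon*/name
```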
  2. This is NOT true, and you should not be telling people to make decisions like this based on your unfounded assumption that it is not in the manual. You should also NOT be making recommendations based on an assumption about something you saw in another motherboard's manual, for a different chipset generation. Please don't do that. I am using all 8 SATA ports and both NVMe slots, and all work fine together. If you are using THE EXACT SAME motherboard, it sounds like you have a fault and should probably contact ASUS support.
  3. Hi everyone, I have 2 unRAID servers (details in signature), both working pretty well, and I've been looking at upgrading my networking to 10Gb for some time. Trouble is, I've been all over the internet for months looking for details on the compatibility of running 10Gb adapters in a PCIe x1 slot. I have an ASUS TUF GAMING X570-PLUS (WI-FI) motherboard, with a limited number of slots (most of them x1). It is a PCIe 4.0 motherboard, so in my case my PCIe 4.0 x1 slot is capable of 1.97GB/s, or 15.76Gb/s. What I'm trying to figure out is: 1. What is a recommended PCIe x4 10Gb NIC card I can use? 2. Can I plug a PCIe x4/x8 10Gb NIC into my PCIe 4.0 x1 slot and get 10Gb NIC speed? 3. Will my motherboard simply negotiate the PCIe 4.0 x1 slot at PCIe 2.0/3.0 speed and halve the bandwidth to 985MB/s or 7.88Gb/s? 4. Has anyone used a 10Gb x4 NIC in a PCIe 4.0 x1 slot, and what was your experience? I'm asking this because there is a lot of THEORETICAL talk, but I haven't found/seen any actual cases. Really hoping someone can help. Is anyone out there running a 10Gb x4 or x8 card in an x1 slot? What was your experience? Thanks. Update 2021-10-24: This thread turned into a story of my journey to 10Gb, and I've tried to include as much information as possible about my hardware choices, purchases, performance, troubleshooting, questions, etc. I hope at least someone may benefit from this information being available later. Feel free to chime in, comment or ask questions if you are on a similar journey or have relevant questions (mostly surrounding 10Gb networking using Unraid).
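For reference, the rough per-lane math behind those numbers (using the commonly quoted line rates and encoding overheads; real NIC throughput will land a bit lower) looks like this:
```bash
# Per-lane PCIe throughput = line rate (GT/s) x encoding efficiency / 8 bits per byte.
awk 'BEGIN {
  printf "PCIe 2.0 x1: %.3f GB/s\n",  5.0 * (8/10)    / 8;   # ~0.500 GB/s (4.00 Gb/s)
  printf "PCIe 3.0 x1: %.3f GB/s\n",  8.0 * (128/130) / 8;   # ~0.985 GB/s (7.88 Gb/s)
  printf "PCIe 4.0 x1: %.3f GB/s\n", 16.0 * (128/130) / 8;   # ~1.969 GB/s (15.76 Gb/s)
}'
```
So the open question is whether the card and slot will actually negotiate a Gen 4 x1 link, or fall back to Gen 3/2 and land below 10Gb.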
  4. You guys might benefit from the HOWTO I wrote over here: Let me know if you found it useful. 👍
  5. @Maddeen Oooh, I would have waited just a couple of weeks for the Ryzen 5000 chips, in particular the 5600X or 5800X in your case. Anyhow, I upgraded from my (now backup) unRAID server to my new Ryzen build earlier this year. If you have your array set up now without any issues, and you're migrating to the same version, it should be easy to swap your disks out. You'll be able to find plenty of advice on this forum about that, from better people than me. You should basically be able to swap your hardware out, as far as I'm aware.
  6. @Maddeen The only gripe I might have with this motherboard is the lack of PCIe slots, but the chipset is PCIe 4.0, so that probably explains the dedicated lanes. I am waiting for a PCIe 4.0 x1 10Gb NIC to come out. Right now I'm using a PCIe x1 dual 1Gb Intel card, teamed with my onboard NIC, for 3Gb of throughput, and that works well. My backup Unraid also uses the same configuration, and I get a nice 3Gb/s between them. There are PCIe 4.0 x1 NICs with 4x 1Gb ports on them, but I'm not too concerned about those right now.
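If anyone wants to sanity-check a similar teaming setup, assuming the default Unraid bond name of bond0 and the usual eth0/eth1 interface names, the bond mode and per-link state can be inspected from the console like this:
```bash
# Shows bonding mode, active slaves and link status (path exists once the bonding driver is up).
cat /proc/net/bonding/bond0
# Confirm each member actually negotiated 1Gb full duplex.
ethtool eth0 | grep -i 'speed\|duplex'
```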
  7. @Maddeen I didn't have to do anything special to get the temp monitor to work, but now it is working. I'm still on the 2407 BIOS, so it might be that version that got things working, but I'll need to update at some point as I intend to upgrade to a Ryzen 5000-series CPU in the coming months. @mscott I have no idea what "card" you're referring to. Can you explain?
  8. Hey @Maddeen I wrote a response to you a week ago, but apparently it didn't post. The ASUS TUF GAMING X570-PLUS (WI-FI) is a great board for my Unraid. I've had no issues with it, apart from the thermal sensor issue in this thread. I currently run 64GB (2x 32GB) of Micron Multi-Bit ECC DDR4 at 3200MHz and it works great. I'm considering doubling that. The reason I got this board is because I wanted a board with:
- rock-solid stability
- X570 chipset
- 2x NVMe M.2 x4 PCIe Gen 4 slots
- ECC support (unofficial, but ASUS is good about including ECC support)
- 8x onboard SATA
- 2x PCIe x16 slots
It delivered on all those counts. I don't use the onboard WiFi, but I wanted the option of making a VM act as an AP and hosting that in Unraid. Also, it seems that I finally got the thermal sensor to work on the 2407 BIOS, no idea what changed. So even that niggle seems to be working now.
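Since ECC on these consumer boards is unofficial, it's worth confirming it's actually active rather than just installed. One quick check from the Unraid console (assuming dmidecode is available, which it normally is) looks like this:
```bash
# "Multi-bit ECC" (or similar) means the board is actually running the DIMMs in ECC mode;
# "None" means the modules are being treated as ordinary non-ECC RAM.
dmidecode -t memory | grep -i 'error correction'
```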
  9. I've been having the same issues since starting this post also. At this point I'm running BIOS 2407. Are you running the latest BIOS 2607? BIOS Links.
  10. Thanks @JorgeB, looks like the array started back correctly. I tried to browse for lost+found and couldn't see the dir, but I'll check with the CLI. For now it looks like it's working. Thanks so much for your help.
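For anyone following along, this is roughly how I'll check from the CLI (the disk number is whatever disk was repaired, disk4 in my case):
```bash
# lost+found only exists if xfs_repair actually disconnected any inodes; list it and count entries.
ls -la /mnt/disk4/lost+found 2>/dev/null || echo "no lost+found on disk4"
find /mnt/disk4/lost+found -type f 2>/dev/null | wc -l
```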
  11. Well I ran the check with -nv as recommended by documentation. Result before I start the array normally:
Phase 1 - find and verify superblock...
        - block cache size set to 3043288 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 0 tail block 0
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 2
        - agno = 1
        - agno = 3
        - agno = 4
        - agno = 7
        - agno = 5
        - agno = 6
        - agno = 0
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

XFS_REPAIR Summary    Thu Oct 1 20:45:37 2020

Phase           Start           End             Duration
Phase 1:        10/01 20:44:38  10/01 20:44:40  2 seconds
Phase 2:        10/01 20:44:40  10/01 20:44:40
Phase 3:        10/01 20:44:40  10/01 20:45:13  33 seconds
Phase 4:        10/01 20:45:13  10/01 20:45:13
Phase 5:        Skipped
Phase 6:        10/01 20:45:13  10/01 20:45:37  24 seconds
Phase 7:        10/01 20:45:37  10/01 20:45:37

Total run time: 59 seconds
  12. Check complete using -L. Results:
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
sb_fdblocks 278251709, counted 279719268
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
data fork in ino 1567417 claims free block 195322
data fork in ino 1567417 claims free block 195323
data fork in ino 1567419 claims free block 250450
data fork in ino 1567419 claims free block 250451
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
data fork in ino 12884902030 claims free block 1610613854
data fork in ino 12884902030 claims free block 1610613855
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 2
        - agno = 1
        - agno = 5
        - agno = 4
        - agno = 7
        - agno = 3
        - agno = 6
        - agno = 0
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (3:1137604) is ahead of log (1:2).
Format log to cycle 6.
done
  13. Ok thanks, running without any options produced this response. I'll try again with -L, as advised and as listed in the response:
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.
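For anyone hitting the same thing later, the overall sequence boils down to something like this, run with the array started in Maintenance mode (device naming is illustrative; it assumes Unraid's /dev/mdX mapping for array disk 4, which was the affected disk in my case):
```bash
# Illustrative xfs_repair workflow, not a copy-paste prescription.
xfs_repair -nv /dev/md4   # dry run: report what would be fixed, change nothing
xfs_repair     /dev/md4   # real repair; aborts with the error above if the log needs replaying
xfs_repair -L  /dev/md4   # last resort: zero the log first (recent metadata changes may be lost)
```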
  14. Ran the test in Maintenance mode, with the -nv options. Results:
Phase 1 - find and verify superblock...
        - block cache size set to 3043288 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 1126679 tail block 1126656
ALERT: The filesystem has valuable metadata changes in a log which is being
ignored because the -n option was used.  Expect spurious inconsistencies
which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
sb_fdblocks 278251709, counted 279719268
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
data fork in ino 1567417 claims free block 195322
data fork in ino 1567417 claims free block 195323
data fork in ino 1567419 claims free block 250450
data fork in ino 1567419 claims free block 250451
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
data fork in ino 12884902030 claims free block 1610613854
data fork in ino 12884902030 claims free block 1610613855
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 7
        - agno = 6
        - agno = 0
        - agno = 1
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
Maximum metadata LSN (3:1137604) is ahead of log (3:1126679).
Would format log to cycle 6.
No modify flag set, skipping filesystem flush and exiting.

XFS_REPAIR Summary    Thu Oct 1 20:31:14 2020

Phase           Start           End             Duration
Phase 1:        10/01 20:30:17  10/01 20:30:18  1 second
Phase 2:        10/01 20:30:18  10/01 20:30:18
Phase 3:        10/01 20:30:18  10/01 20:30:50  32 seconds
Phase 4:        10/01 20:30:50  10/01 20:30:50
Phase 5:        Skipped
Phase 6:        10/01 20:30:50  10/01 20:31:14  24 seconds
Phase 7:        10/01 20:31:14  10/01 20:31:14

Total run time: 57 seconds
  15. Yeah sorry, I forgot to attach it. Got a new one and added it to the original post.
  16. Hi all, something strange is happening with my array and I'm not sure if I should be very worried or what to do. Edit: I'm running 6.8.3, the latest stable, with no changes for some time now.
So today I ran a monthly parity check on my 34TB array. A little over 18 hours in, I checked the status and saw that there were 450 errors listed. I checked the current log, and it looks like one of my disks (Disk 4 / ata7) was playing up and having an issue. I stopped the parity check, stopped the array and checked the log of the disk. It looked like the disk was having some kind of initialisation error, but I foolishly didn't take a screenshot or note.
I brought the array back online and saw that the reported usage was the same, with Disk 4 still having issues. When accessing the array over LAN, I noticed many files missing, and that my VMs wouldn't start. There appeared to be many files and directories missing, despite the reported array size being correct. The VMs would not start because files like the GPU BIOS and the virtio-win-0.1.173-2.iso image were missing. At this point I decided to completely shut down the system and leave it for a little bit, then start up clean.
Now the array is mounted, but Disk 4 is showing "Unmountable: No file system", with the option to format the disk available further down. The files that were missing were still missing, but after a short time they seemed to have reappeared. I haven't verified everything. The array usage now seems to report what looks like an incorrect total.
Any advice on what I should or can do? Thanks for any help. blaster-diagnostics-20201001-2003.zip
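In case it helps anyone reading later, the kind of checks I did on the disk can also be done from the console, roughly like this (the device letter is hypothetical; match it against Disk 4 on the Main page first):
```bash
# Pull the kernel messages for the misbehaving ATA link, then a quick SMART health summary.
grep -i 'ata7' /var/log/syslog | tail -n 20
smartctl -a /dev/sdX | grep -iE 'result|reallocated|pending|uncorrect'
```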
  17. There are a few in this thread that have the same array/shutdown issue, but you've got a point that it may not be enough to pull it immediately. There is another thread on this forum about the startup message, and I cannot fathom how many people might be experiencing "can't shutdown" issues. I know I searched through the forum enough times and trawled through logs to reach my own conclusion. Still, I think it's important to highlight this link, so that others might find it. I disagree that this doesn't warrant further discussion, it definitely does, but certainly less about the developer abandoning his project. I've been trying to investigate the codebase to find why the issues are encountered, have already posted some of my own findings, and I would encourage anyone with deeper knowledge of unraid to chime in on this, as maybe it can be fixed and submitted as a pull request to the repo.
  18. I get what you're saying, but I'm not getting into a discussion about what "feels" right. I'm stating the facts. Yeah, I can see why you and others would leave this plugin be, but it's still out there, released in the Community Apps selection, which is the recommended way to find and install plugins and extensions, right alongside official releases of plugins and the like. It's MOST DEFINITELY RELEASED, which has nothing to do with its Alpha/Beta status. Sorry, but that is the truth. I know personally that I'm not RELYING on this for any of my data. My gripe has clearly been that it makes unraid unstable (in my case, MY unraid server), and not just for me, which is unacceptable, and I want others to know the danger this plugin poses. If it doesn't work or is buggy within its domain... fine, no worries... but if it puts your entire server and data in danger by destabilising the whole system, causing startup and shutdown errors (and things in between), then this SHOULD at the least be pulled or fixed. That is not "relying on this to secure their data", it's much more dangerous. What makes it worse is that it causes problems silently, so just by installing it, an unraid system or array could be rendered unusable. Anyway, I'm no super-dev myself, but I've developed things in the past, and I understand that if you release something then you should be expected to support it, which is why I rarely release anything and support anything that I "release" online through ANY channel. Nobody is perfect, but this is the marked difference between those who try to be good developers (emphasis on "try"), and those that don't. Take the high road and judge my unpopular opinion if you like, but it is what it is and I'm not trying to be rude or sugar-coat anything.
  19. I understand what you're saying, but there is a marked difference between ALPHA and BETA software. This is the former, and I would argue it's PRE-ALPHA given the issues encountered by numerous people. I'm just saying it's inappropriately labelled, which is dangerous and playing with people's data integrity. I know it's their own risk they take by installing it, but the BETA label is severely misleading. Absolutely, total agreement. I also stated that very point. However, for this plugin to move beyond its debatable BETA status, the dev needs to speak up and work with those willing to test. So far, I don't hear anything, and the GitHub account for this is quiet, to the point that the last dev commit was nearly 4 months ago. On the basis of that alone, it's hard to recommend this plugin for use at all.
  20. Agreed that a lot of people don't read up before posting. I'm not trying to be rude. I also tried to post my findings to help the dev, and others, and my follow-up was to hopefully get something out of the dev. So far, silence. Personally, I was a little annoyed about how this server-breaking issue has not even been acknowledged by the plugin developer. I battled with this for some weeks and had to hard-reset my server a few times, which is NOT IDEAL for anyone (especially with the cache-pool BTRFS corruption issues, which I've also personally experienced). I get that this is a "free" plugin and all, but I wouldn't recommend anyone use it: it's marked as BETA, but it is really very, very unstable and seemingly untested, especially for a crucial solution such as VM backups. It's doing some weird stuff that I'm unsure about, and should be marked as pre-alpha/alpha. Lastly, I know that they don't wish to step on people's toes, but a basic (not advanced) solution to this kind of thing (config/VM/Docker backup) really needs to be rolled into unraid core.
  21. 1. I understood the BETA nature of this plugin before using it (please refer to the 1st line in the 1st post of this thread). 2. I have already been using the userscript version, since I found the plugin to be unstable. 3. It is important to report these issues so that:
- others can know about the stability issues before using it, SO THAT THEY DON'T LOSE DATA FROM AN UNSTABLE SYSTEM.
- the developer can work with those who want to find and test a fix.
@Stupifier Not to be rude, but your response helps nobody. I have submitted my findings and a report of what I've seen so far in this thread, so that the dev can help the people encountering strange unrelated errors and system instability. If you have something constructive to add to the discussion, please do so.
  22. Something messed up is definitely going on. It would be great if the dev of this plugin had ANYTHING to say about it, please?
  23. I've been using this plugin to back up my VMs for a couple of weeks now, but unfortunately I've found that it is the cause of my server being unable to shut down. My unRAID was unable to shut down and would freeze, forcing me to hard-kill the system, causing a parity check every time. I do not want to do this on an otherwise stable system. Rolling back from 6.9.0-b1 to 6.8.3 didn't solve it. Running in safe mode showed that everything worked, but I couldn't start my VMs due to the Unassigned Devices plugin. Uninstalling the VM Backup plugin solved the issue, and removed the error at startup. Something in the VM Backup plugin is breaking the hypervisor and messing with my bonded network connection. From what I can tell, first there's the "error: failed to connect to the hypervisor" error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory. After this, I seemed to be getting some kind of trace error when shutting down, more errors further into the shutdown, and finally it stalls there forever (I've waited a day for this, and it didn't shut down 😞). Uninstalling the VM Backup plugin fixed the issue, and I can now shutdown/reboot without stalling, crashing or a parity check. The errors have gone also. It is a real shame because I use this plugin daily (nightly). Does anyone know why this is happening? I'd like to use this plugin.
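For anyone comparing symptoms, a quick way to confirm whether libvirt actually came up after boot (paths taken from the error above) is something like:
```bash
# If the socket is missing and libvirtd isn't running, VM operations will fail exactly like this.
ls -l /var/run/libvirt/libvirt-sock 2>/dev/null || echo "libvirt socket missing"
pgrep -a libvirtd || echo "libvirtd not running"
```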