codefaux

Members
  • Content Count

    100
  • Joined

  • Last visited

  • Days Won

    1

codefaux last won the day on April 29

codefaux had the most liked content!

Community Reputation

9 Neutral

About codefaux

  • Rank
    Member

Converted

  • Gender
    Male
  • ICQ
    was shut down lol
  • AIM
    also sunset
  • YIM
    doesn't even exist bro
  • MSN Messenger
    also dead. srsly?


  1. Looking at your kernel logs, neither the syslog nor the lspci output indicates that the Linux kernel can see both devices at the same time. The output in lspci.txt shows one GPU or the other -- never both. That means the PCI subsystem itself cannot see both devices, which points to hardware. Even with vfio passthrough in use, lspci should still show the hardware as present (see the PCI-scan sketch after this list). It also means something other than installing the drivers changed between those boots, because drivers cannot mask hardware from the kernel. Your problem seems to be motherboard device support, physical installatio
  2. I'm not familiar with using Wireguard or related tools with Unraid, or with their implications, but it sounds like you might be running afoul of something pretty common. To clarify: are you having problems accessing your server via domain name from inside the LAN, or from outside it? What level of experience do you have with DNS troubleshooting? It would also help to know which DDNS service you're using -- both the company hosting it and the software/website you're using to configure IP addresses and so forth. Also, are you expecting inbound connections to use your VPN's tunnel, or
  3. I've had limited experience with non-fatal MCE errors (i.e., most of my MCE errors have historically been crash-causing), but here are the immediate suspects:
     - Overclocking (XMP is overclocking) -- turn it off if you're overclocking.
     - BIOS/microcode updates -- check your motherboard support website for updates and apply them carefully.
     - Not sure if it's related, but during loading/init of the Intel GPU modules your kernel explodes -- very likely worthwhile to disable this, if you can.
  4. Alrighty -- first and foremost: your data probably isn't gone, and so long as you're careful it probably won't be. Scanning your logs, I don't see a direct cause, so walk me through how this happened in more detail: when you say you made a backup and restored the files, what exactly did you do? How did you handle the License change? Is your Array still starting, and does it still contain all of the same disks? Unrelated: Unraid can't see SMART data for your large drive, because it's behind what's technically a RAID controller. People will be quick to point out tha
  5. OH! Also, if you're worried about bit rot or any other form of quiet corruption, there's a File Integrity plugin which handles data checksumming and verification, periodic checks, etc. (a conceptual checksumming sketch follows this list). Obviously this comes with some performance impact, but most modern hardware is more than able to handle it.
  6. Not quite. I broke down a lot of this information in another post, but it was aimed more at cache comprehension. The writeup will still help, but here's the TL;DR:
     - Unraid stores data in a regular single-partition-per-drive XFS filesystem format by default. It combines the drives into a single "storage pool" using a sort of layered filesystem controller. It provides Parity to a jagged array by requiring the Parity disk to be the largest and padding smaller disks in virtual space for parity calculations (see the parity sketch after this list), i.e. pretending a disk is filled with 0xFF or 0x00 any time it's accessed beyond i
  7. May 2 06:02:34 unraid apcupsd[6284]: Power failure.
     May 2 06:02:37 unraid apcupsd[6284]: Power is back. UPS running on mains.
     May 2 06:24:25 unraid apcupsd[6284]: Power failure.
     May 2 06:24:28 unraid apcupsd[6284]: Power is back. UPS running on mains.
     May 2 06:57:22 unraid apcupsd[6284]: Power failure.
     May 2 06:57:25 unraid apcupsd[6284]: Power is back. UPS running on mains.
     May 2 07:08:25 unraid apcupsd[6284]: Power failure.
     May 2 07:08:28 unraid apcupsd[6284]: Power is back. UPS running on mains.
     May 2 08:00:01 unraid root: Starting Mover
     May 2 08:00:01 unraid root: Forcing turbo w
  8. I assume you mean tunable poll_attributes, on Disk Settings? I've changed it, we'll see how that goes.
  9. Also, to clear it up - XMP is overclocking. I mentioned that, but maybe I wasn't quite clear enough. https://www.intel.com/content/www/us/en/gaming/extreme-memory-profile-xmp.html XMP is overclocking. There's no conditional. Full stop. If you want more colorful words, XMP is the Paint-By-Numbers version of Overclocking, which makes it less safe than actual overclocking, because with actual overclocking users increment by tiny margins until they start to destabilize and then back off in a careful, slow, calibrated approach. Actual overclocking takes time, effort, and pro
  10. Okay -- one of those logs is pure junk, nothing but connection-error, connection-closed, and timeout messages. There isn't even enough context to piece together why without more patience than I care to spend on it, haha. The other log indicates that your UPS is detecting power faults every few minutes over a broad span of time (a small log-tallying sketch follows this list). Unless it's a double-conversion UPS, switching between battery power and mains is not instant; there's a brief window where the power supply sees a short drop in power, then a sudden sharp return of rough, stepped-square-wave (in most cases) faux-AC mains -- which is acc
  11. One of the problems with trace-inducing crashes is that a single cause can present in slightly different ways at different times. It's difficult to guarantee, but I agree that the issue should be related. Contextually, your swag proxy's breakage depends on where it's being accessed from. If your local network and/or other Docker containers are using it, they should be okay; the Unraid OS itself is the only device which should lose access to it. I don't mean to be insulting, it's just that "should" does a lot of work with technology, so unfortunately it's hard not to nam
  12. I think it may be best to break this down into one smaller question at a time, in single-themed threads, with a bit less backstory and a bit more question. It might also be easier for you to keep track of the relevant information coming back from helpful posters if it's split into separate threads. If you'd prefer not to have separate threads, we can certainly still continue here. I'd love to answer your questions in more detail, but with such a large single item to reply to, it can be rather difficult. I have difficulty keeping information straight sometimes myself -- as evidenced by a ra
  13. Awesome, glad to hear you got things wrapped up. Unless I'm mistaken, that's the last of things to cover? If so, good luck!
  14. This is typical, and any time you flash a BIOS it is considered best practice to do a Load Defaults immediately afterward -- as is usually indicated in the flashing instructions, and widely ignored. The reasoning is obscure to the average user, but important: historically, when storing your values, BIOSes don't save "Processor 1 clock multiplier was 24x" or whatever. It's a block of numbers with no context (see the contrived example after this list). The next BIOS version might use that same data space for "CPU Northbridge Overvolt" and cause damage. Typically vendors have important things like that reset during the update pr
  15. Apologies for my untimely disappearance. As it stands, if things go well, you likely won't lose any data. What I'm trying to caution against is rash action, or inaction, and I apologize if I wasn't clear enough in that regard. My concerns were regarding outcomes with a missing disk, and both a parity and a data disk showing errors. With the missing disk returned, data loss chances are at a minimum. My suggestion here would be to replace one data disk at a time, then the parity disk, IF any of them continue to fail SMART tests. If you replace parity first wit
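
A minimal sketch to go with item 1 above: it enumerates PCI display-class devices straight from sysfs instead of calling lspci, using the standard Linux sysfs layout (/sys/bus/pci/devices, class code 0x03 for display controllers). The script itself is illustrative and is not from the original post.

#!/usr/bin/env python3
# List PCI display controllers (class 0x03xxxx) directly from sysfs.
# If only one GPU appears here, the kernel's PCI subsystem genuinely cannot
# see the other device -- drivers and vfio binding do not hide devices here.
from pathlib import Path

PCI_DEVICES = Path("/sys/bus/pci/devices")

def read_hex(path: Path) -> int:
    return int(path.read_text().strip(), 16)

def main() -> None:
    gpus = []
    for dev in sorted(PCI_DEVICES.iterdir()):
        # The top byte of the class code identifies display controllers (0x03).
        if read_hex(dev / "class") >> 16 == 0x03:
            gpus.append((dev.name, read_hex(dev / "vendor"), read_hex(dev / "device")))
    for addr, vendor, device in gpus:
        print(f"{addr}  vendor=0x{vendor:04x}  device=0x{device:04x}")
    print(f"{len(gpus)} display-class device(s) visible to the kernel")

if __name__ == "__main__":
    main()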
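
Item 5 mentions the File Integrity plugin; the sketch below only illustrates the underlying checksum-and-verify idea, not the plugin's actual implementation. The manifest location and share path are made-up placeholders.

#!/usr/bin/env python3
# Conceptual bit-rot check: record a checksum per file, then re-hash later
# and report mismatches. Not the File Integrity plugin's implementation.
import hashlib
import json
from pathlib import Path

MANIFEST = Path("checksums.json")  # hypothetical manifest location

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build(root: Path) -> None:
    manifest = {str(p): sha256_of(p) for p in root.rglob("*") if p.is_file()}
    MANIFEST.write_text(json.dumps(manifest, indent=2))

def verify() -> None:
    for name, expected in json.loads(MANIFEST.read_text()).items():
        if sha256_of(Path(name)) != expected:
            print(f"MISMATCH: {name}")

if __name__ == "__main__":
    build(Path("/mnt/user/example-share"))  # hypothetical share path
    verify()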
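
Item 6 describes parity over a jagged array, with smaller disks treated as padded in virtual space. The toy below shows that idea with single XOR parity and zero padding; it is a simplification for illustration, not Unraid's actual parity code.

# Toy single-parity calculation over "disks" of unequal size. Bytes past the
# end of a smaller disk count as 0x00, so the parity (sized to the largest
# disk) covers every position.

def parity(disks: list) -> bytes:
    out = bytearray(max(len(d) for d in disks))
    for disk in disks:
        for i, byte in enumerate(disk):
            out[i] ^= byte
    return bytes(out)

def rebuild(missing: int, disks: list, par: bytes) -> bytes:
    # Recover one missing disk by XORing parity with every surviving disk.
    out = bytearray(par)
    for idx, disk in enumerate(disks):
        if idx == missing or disk is None:
            continue
        for i, byte in enumerate(disk):
            out[i] ^= byte
    return bytes(out)

if __name__ == "__main__":
    d0 = bytes([0x11, 0x22, 0x33, 0x44])  # larger disk
    d1 = bytes([0xAA, 0xBB])              # smaller disk, virtually zero-padded
    p = parity([d0, d1])
    # Pretend d1 failed: rebuild from d0 and parity, then trim to its real size.
    assert rebuild(1, [d0, None], p)[: len(d1)] == d1
    print("rebuild of the smaller disk matches the original")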
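
Items 7 and 10 deal with the apcupsd power-fault pattern. Below is a quick illustrative parser for lines in that syslog format; the year is an assumption, since syslog timestamps don't carry one.

#!/usr/bin/env python3
# Tally apcupsd power drops from syslog-style lines by pairing each
# "Power failure." with the next "Power is back." entry.
import re
from datetime import datetime

LINE = re.compile(r"^(\w{3}\s+\d+ \d\d:\d\d:\d\d) \S+ apcupsd\[\d+\]: (.+)$")

def tally(lines, year=2021):  # year assumed; syslog lines omit it
    drops, started = [], None
    for line in lines:
        m = LINE.match(line.strip())
        if not m:
            continue
        stamp = datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S")
        if m.group(2).startswith("Power failure"):
            started = stamp
        elif m.group(2).startswith("Power is back") and started:
            drops.append((started, (stamp - started).total_seconds()))
            started = None
    print(f"{len(drops)} power drop(s)")
    for start, seconds in drops:
        print(f"  {start:%b %d %H:%M:%S}  lasted ~{seconds:.0f}s")

if __name__ == "__main__":
    tally([
        "May 2 06:02:34 unraid apcupsd[6284]: Power failure.",
        "May 2 06:02:37 unraid apcupsd[6284]: Power is back. UPS running on mains.",
    ])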
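
Item 14 argues that saved BIOS settings are just context-free numbers that a newer firmware revision may reinterpret. The example below is entirely contrived (both "layouts" and all field names are invented) and only demonstrates that reinterpretation risk.

# Contrived example: the same stored bytes mean different things under two
# firmware layouts. Field names are invented for illustration.
import struct

# Old firmware stores (cpu_multiplier, ram_divider) as two unsigned bytes.
saved_blob = struct.pack("BB", 24, 4)

# A newer firmware reads the same offsets as (overvolt_steps, fan_profile).
overvolt_steps, fan_profile = struct.unpack("BB", saved_blob)

print("old meaning: multiplier=24x, divider=4")
print(f"new meaning: overvolt_steps={overvolt_steps}, fan_profile={fan_profile}")
# Blindly applying 24 "overvolt steps" is the kind of damage that
# Load Defaults after flashing is meant to prevent.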