Everything posted by juchong

  1. I posted my setup a while ago, but it's seen quite a few updates since then. I consolidated the backup drives into the primary system, so it's now sitting at 378TB usable on the main array. The "unpacking" cache array is sitting at 5.1TB usable (U.2 NVMe drives), and the "containers" array is sitting at 1TB (M.2 NVMe drives). All the drives live inside a 36-bay, 4U Supermicro chassis. The server is currently colocated inside a local datacenter. 😁
  2. It's been a while since I played with this because I ended up selling my FusionIO drive in favor of U.2 NVMe drives, but ich777 is correct. You can absolutely use his Docker container to build the kernel modules as a plugin, but it'll be a massive pain to keep up with releases. Here's my repository with the build script I was using for testing. YMMV: https://github.com/juchong/unraid-fusionio-build
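     For reference, the core of what a build script like that automates is just an out-of-tree kernel module build. Here's a rough sketch; the function name and both paths are made up for illustration and are not copied from the repository:

```shell
#!/bin/sh
# Rough sketch of an out-of-tree module build against a specific
# Unraid kernel source tree. KERNEL_SRC/MODULE_SRC paths below are
# placeholders, not the real layout used by unraid-fusionio-build.
build_module() {
  kernel_src="$1"
  module_src="$2"
  # Bail out early if the kernel tree isn't where we expect it;
  # a mismatched or missing tree is the usual failure mode when
  # a new Unraid release ships.
  if [ ! -f "$kernel_src/Makefile" ]; then
    echo "kernel source not found at $kernel_src" >&2
    return 1
  fi
  make -C "$kernel_src" M="$module_src" modules
}
```

     On a real build you'd point the first argument at the kernel source matching the running Unraid release and the second at the driver checkout, which is exactly why tracking releases by hand gets painful.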
  3. Cloud Backup

    I wanted to thank the team for how seamless they've made the backup and restore process! My USB drive failed last night, but luckily, the latest configuration was stored safely in the cloud. Restoring the configuration was extremely easy and got the server up-and-running in just a few minutes. A+ work, everyone!
  4. Your post is what tipped me off to there being a DOS-based utility available. I was already contemplating manually programming the ROM using an external programmer.
  5. I recently transplanted my E3C246D4U into a new case and accidentally got the UID button on the back of the board stuck. Once I powered the server on, I ran into the dreaded "BMC Self Test Status Failed" error. After a bit of digging, I found a utility posted by ASRock that allows the BMC to be flashed in DOS. This utility, along with the most up-to-date BMC firmware downloaded from their website, got me up-and-running! Here's a link to the ASRock website. I've also attached the utility to this post in case it disappears from the internet. socflash v1.20.00.zip
  6. I'm having a similar issue as well... What's going on?!
  7. Hey! I'm seeing a similar issue on my server as well. Here's the relevant syslog:

     Feb 17 19:46:21 thecloud root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
     Feb 17 19:46:25 thecloud root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
     Feb 17 19:46:29 thecloud root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
     Feb 17 19:46:33 thecloud root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
     Feb 17 19:46:37 thecloud root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
     Feb 17 19:46:41 thecloud root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
     Feb 17 19:46:45 thecloud root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
     Feb 17 21:09:50 thecloud vsftpd[24107]: connect from (
     Feb 17 21:09:50 thecloud in.telnetd[24108]: connect from (
     Feb 17 21:09:50 thecloud sshd[24109]: Connection from port 36967 on port 22 rdomain ""
     Feb 17 21:09:50 thecloud sshd[24109]: error: kex_exchange_identification: Connection closed by remote host
     Feb 17 21:09:50 thecloud sshd[24109]: Connection closed by port 36967
     Feb 17 21:09:50 thecloud vsftpd[24113]: connect from (
     Feb 17 21:09:55 thecloud telnetd[24108]: ttloop: peer died: EOF
     Feb 17 21:10:01 thecloud smbd[24110]: [2021/02/17 21:10:01.582271, 0] ../../source3/smbd/process.c:341(read_packet_remainder)
     Feb 17 21:10:01 thecloud smbd[24110]: read_fd_with_timeout failed for client read error = NT_STATUS_END_OF_FILE.
     Feb 17 21:53:56 thecloud kernel: Linux version 5.10.1-Unraid (root@Develop) (gcc (GCC) 9.3.0, GNU ld version 2.33.1-slack15) #1 SMP Thu Dec 17 11:41:39 PST 2020
     Feb 17 21:53:56 thecloud kernel: Command line: BOOT_IMAGE=/bzimage initrd=/bzroot console=ttyS0,115200 console=tty0
     Feb 17 21:53:56 thecloud kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
     Feb 17 21:53:56 thecloud kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
     Feb 17 21:53:56 thecloud kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
     Feb 17 21:53:56 thecloud kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
     Feb 17 21:53:56 thecloud kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
     Feb 17 21:53:56 thecloud kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
     Feb 17 21:53:56 thecloud kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
     Feb 17 21:53:56 thecloud kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
     Feb 17 21:53:56 thecloud kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
     Feb 17 21:53:56 thecloud kernel: BIOS-provided physical RAM map:
     Feb 17 21:53:56 thecloud kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
     Feb 17 21:53:56 thecloud kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
     Feb 17 21:53:56 thecloud kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
     Feb 17 21:53:56 thecloud kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
     Feb 17 21:53:56 thecloud kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved

     @gdeyoung do you happen to have a Ubiquiti device acting as your gateway/router? The reason I ask is because my USG Pro was actively trying to log into the server via ssh/telnet when it hung.
  8. Hi folks, I appreciate the recommendation (and the efforts on getting the updated BIOS from ASRock)! I swapped my old setup over to the new board, processor, and RAM without any issues.
  9. IPMI is high on my list of "wants" for sure. I assumed that the M.2 slot was shared with Slot #5, but now I'm not sure after reading through the product page. My current setup uses an i9 9900k and a C9Z390-PGW, so it's not "bad", but I really miss IPMI, accurate temperature readings, and ECC.
  10. Hi everyone! I know folks have been talking about the E3C246D4U (mini-ITX), but does anyone know if there's a similarly well-supported ATX board that you'd recommend? I have a U.2 drive that I use as my main cache drive along with an M.2, Intel fiber card, and the JBOD controller, so a mini-ITX board won't really cut it.
  11. Hi folks, I ended up inadvertently running into an issue with the fan speed plugin over the weekend that I thought might be important to report. It's worth pointing out that the fan plugin worked perfectly for several months. My motherboard is a Supermicro C9Z390-PGW, and the drivers loaded are coretemp and nct6775. I've been putting off a BIOS update for several months, so I finally took the time to shut the server down and take care of it. After booting everything back up, I discovered that the UnRAID GUI would stop working (crash) a few seconds after the boot process finished. Docker, VMs, network shares, etc., all continued working, and only the GUI would crash. Digging through the logs, I traced the problem back to the auto fan plugin. It seems that, for some reason, the plugin was causing the GUI to crash. I managed to remove the plugin using the console, and after doing so, the GUI started up again. I suspect that the BIOS update somehow changed the path to a hardware resource that the plugin didn't correctly handle. I'm not sure what the scripts that make up the plugin look like, but it might be worth adding a check at start-up to handle any exceptions that might be thrown if allocated resources are missing. The plugin worked as expected after re-installation. I hope this helps!
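     To illustrate the kind of start-up check I mean, here's a rough sketch. The function name and the example path are made up for illustration and are not the plugin's actual code:

```shell
#!/bin/sh
# Sketch of a defensive start-up check: confirm the hwmon/pwm paths the
# plugin was configured with still exist before touching them, so a BIOS
# update that moves or removes a sensor node can't take the GUI down.
check_fan_paths() {
  for p in "$@"; do
    if [ ! -e "$p" ]; then
      # Log and back off instead of failing hard later.
      echo "fan control: missing device path $p, skipping fan control" >&2
      return 1
    fi
  done
  return 0
}
```

     The plugin could call something like `check_fan_paths /sys/devices/platform/nct6775.656/hwmon/hwmon2/pwm1` (an illustrative nct6775 path) at start-up and disable itself cleanly when the path has moved.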
  12. Here's my contribution to the storage party. The main server has 224TB usable, the backup server has 126TB usable. The main server drives are housed in a Chenbro NR40700 case and the backup server is housed in a Supermicro CSE-826. I'll need to upgrade the backup server at some point since every drive bay is full.
  13. I can definitely understand your reaction if the norm on the forums is for folks to "demand the source" and then walk away from the discussion. In contrast, I'm actually serious about trying to understand how the driver enumerates devices. Here's my fork of the driver with modifications specifically intended to allow compilation using the Unraid Kernel Docker Container: https://github.com/juchong/iomemory-vsl4 Edit: Here's the build script for anyone that wants to try it out themselves: https://github.com/juchong/unraid-fusionio-build
  14. This is disappointing. Getting back to the thread topic, can anyone help me identify which section of the code performs the device enumeration? It would help speed up the process significantly.
  15. Hey now, no need to be passive-aggressive. I'm trying to help the community. Both of my interactions with moderators in this thread have been met with unwarranted aggression thus far. Reading through the forums, there seems to be a similar pattern in lots of threads where people are asking genuine questions.
  16. Thanks. Would it be possible to point me to the md source code (if available publicly)? I've put together a reliable method of building the driver on 5.x kernels already.
  17. Thanks, but your reply was not helpful. I'm trying to add support for the driver and have been largely successful. The last piece in the puzzle is getting the md driver to detect the drive, hence my question.
  18. I'm trying to understand what UnRAID expects to read from a drive before it will be enumerated by the md driver. I modified the FusionIO driver to enumerate as /dev/hda temporarily and still couldn't get the driver to detect the card. I suspect that the md driver expects the drive to have a serial number (the FusionIO driver does not provide one by default), but I could be wrong. Does anyone have more insight into this?
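     In the meantime, one quick way to probe the serial-number theory is to check which properties udev reports for the device. A small sketch; the helper function is mine, and /dev/fioa is the node the FusionIO driver creates:

```shell
#!/bin/sh
# Check whether a block device exposes a serial number through udev.
# The helper reads `udevadm info --query=property` output on stdin and
# looks for an ID_SERIAL property; this is just a probe of the
# hypothesis above, not anything the md driver itself documents.
has_serial() {
  grep -q '^ID_SERIAL='
}
```

     On a live system: `udevadm info --query=property --name=/dev/fioa | has_serial && echo "serial present" || echo "no serial reported"`.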
  19. Update: After a few minutes of sitting on the main page, the CPU meter will start to display data again. Is there a built-in proxy that maybe isn't getting configured properly?
  20. Hi everyone, I'm running Unraid 6.9.0 beta 30 and recently ran into issues with the VNC viewer and CPU meter both refusing to work after I changed the default http port to 8081. I made this change to allow SWAG to handle requests on port 80. Has anyone run into issues like this? Is there a setting I can change to get these features back? Thanks!
  21. I've been able to successfully compile and install the latest version of the driver on beta30 using your docker container! I've even gone as far as to add the necessary variables so that the code fits nicely within your current script. I've also been able to move away from using .txz files in favor of dkms. This ended up simplifying the compilation process quite a bit and should hopefully make it somewhat future-proof. I still have to figure out the custom udev rules and perform more testing, but otherwise things look good on the 5.8.13 kernel!

     /dev/fioa1:
      Timing cached reads:   25754 MB in 1.99 seconds = 12950.66 MB/sec
      Timing buffered disk reads: 6038 MB in 3.00 seconds = 2012.51 MB/sec
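     For anyone curious, the dkms side boils down to a small dkms.conf next to the module source. A hedged sketch; the version string and destination path here are illustrative guesses, not taken from my build repo:

```shell
# Hypothetical dkms.conf sketch for the iomemory-vsl4 module.
# PACKAGE_VERSION and DEST_MODULE_LOCATION are illustrative only.
PACKAGE_NAME="iomemory-vsl4"
PACKAGE_VERSION="4.3.7"
BUILT_MODULE_NAME[0]="iomemory-vsl4"
DEST_MODULE_LOCATION[0]="/kernel/drivers/block"
AUTOINSTALL="yes"
```

     With that in place, the standard `dkms add` / `dkms build` / `dkms install` cycle rebuilds the module against whichever kernel is running, which is what removes the per-release .txz juggling.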
  22. Hi folks! I've been able to get the driver to compile on 4.x kernels (Unraid 6.8.3) cleanly using the Unraid-Kernel-Helper docker. I'm working out how to get unRAID to recognize the drives now. Stay tuned!
  23. You'd be surprised! There seems to be a lot of interest from folks looking to add these to their servers since the prices have dropped dramatically in the past few years (link). I personally got my hands on a 6.4TB card for ~$400, so I've got a lot of incentive to get this working! It's just a kernel module, but I have to pull a few .txz files needed for the compile. Not a big deal. I'm running tests now, but everything seems to be happy! Do you have a GitHub repo I can make a pull request in?
  24. Thanks! I'm moving over my hacked-together build process into your build script for testing. If this works out, would you consider adding this feature to the Docker?
  25. Thanks, everyone! I'll know a bit more once I do some testing, but I think I might have a reasonable method of enabling the use of the iomemory modules! I personally have a 6.4TB version ready to be used as a cache drive.