zetabax

Members · 29 posts

Everything posted by zetabax

  1. Hi there, I need to purchase some used SAS drives for my MD1200 and have found that 10TB 12Gb/s SAS drives are more readily available on the used market than 6Gb/s disks. My question: since the H200 behind my MD1200 is 6Gb/s, if I plug 12Gb/s SAS drives in, will the drives negotiate down to 6Gb/s, or will it fail to recognize them? Thanks in advance
  2. Thanks @JorgeB! I will report back shortly
  3. Thank you for the tip. Problem solved: The Mikrotik server isn't compatible with UEFI so I rebuilt the VM using SeaBIOS and it worked. Thanks!
  4. Hey guys, My Unraid setup consists of 12x 10TB SAS drives running BTRFS with 2 parity drives. Recently I started getting a consistent number of errors on my weekly parity check (finding 17836 errors). I've manually run the check ensuring 'Write corrections to parity' is ticked, but at the end of each scan I get the same number of errors. According to SMART, all the disks appear to be in good health. I've attached the diagnostics file along with a couple of screenshots. As you'll see in the attached, I'm running a PowerEdge R730XD with a PERC controller flashed to LSI MegaRAID SAS-3 3108. Thanks in advance. quadra-diagnostics-20210501-2117.zip
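     For reference, this is roughly how a SAS disk can be spot-checked from the console as well (device names are placeholders; depending on how the 3108 exposes the disks you may need the -d megaraid,N form instead of the plain device):

        # overall health verdict for one disk
        smartctl -H /dev/sdb
        # full SMART/SCSI dump, including grown defects and error counters
        smartctl -x /dev/sdb
        # same disk when it sits behind a MegaRAID controller (N = the drive's device ID on that controller)
        smartctl -H -d megaraid,0 /dev/sdb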
  5. Thanks for your response. Yes, however the instructions for QEMU/KVM are extremely vague. I also downloaded the ISO, created a new VM and booted the image, with the same result. This isn't the first VM installed on my Unraid server; I have several Windows VMs that were a piece of cake to set up
  6. Thanks for the response @joecool169. Here are the boot options I have, none of which work.
  7. Hi there, I'm trying to set up Mikrotik's The Dude monitoring server in KVM on Unraid. The instructions seemed pretty straightforward: download the raw disk image, set up a new Linux VM and point its disk at the folder and file that I moved into a subfolder in domains. The VM appears to set up fine, but no luck: I boot into the UEFI Interactive Shell (rookie move I guess). I would really appreciate it if someone could point me in the right direction please.
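     For anyone hitting the same wall, the disk-image half of the setup boils down to something like the following (filenames and share paths are placeholders, not Mikrotik's exact names; run from the Unraid console):

        # keep the raw image in its own subfolder under the domains share
        mkdir -p /mnt/user/domains/dude
        cp /mnt/user/isos/dude-server.img /mnt/user/domains/dude/
        # sanity-check that it really is a raw disk image before pointing the VM at it
        qemu-img info /mnt/user/domains/dude/dude-server.img

     The VM's primary vDisk then points at that file with the type set to raw. (As noted in the follow-up above, the image has no EFI boot files, so the VM template needs SeaBIOS rather than OVMF.)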
  8. Coincidentally I have a similar challenge. I recently purchased a bunch of used SAS drives, stood up Unraid, migrated a bunch of data over and decided to look into disk health. All drives check out with SMART Health Status: OK, however when I dig deeper I see the Non-medium error count ranges from 0 all the way up into the millions. I'm going to keep an eye on the error rate to see if it increases, but I'm wondering if the Non-medium errors are from the previous owner? I recall reading somewhere that SMART errors get permanently recorded to the drive and that there's no way to clear them. Would love to hear anyone's thoughts. Thanks in advance
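     For anyone comparing numbers, the same counters can also be read from the console with something like this (sdX is a placeholder; sg_logs comes from the sg3_utils package, which may need to be installed separately):

        # SAS/SCSI drives report "Non-medium error count" in their SMART output
        smartctl -x /dev/sdX | grep -i "non-medium"
        # the raw Non-Medium Error log page (0x06), if sg3_utils is available
        sg_logs --page=0x6 /dev/sdX

     Whether those counters can ever be zeroed seems to depend on the drive; sg_logs has a --reset option, but I haven't confirmed it works on these disks.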
  9. Fair point. Unfortunately though, I'm only dealing with 12x 10TB 7.2K SAS drives, so write performance is bound to suck. I've got quite a bit of data and, based on my calculations, I'd be waiting a long time for the transfer to complete. So I went ahead and put in 2x 1TB WD Blue SSDs and set them as a write cache. The cache still fills up only about once a day, which I can live with. Rsync will fail, I wait for the mover to flush the cache, then restart rsync. Seems to be working.
  10. Hi All, So I took the plunge and purchased a license, and now I'm copying over my legally acquired collection of movies and music. Transfer speeds were brutally slow, around the 250ish Mbps mark, so I decided to set the share to write-cache the data. Perfect, write speeds pop up to 1.1 Gbps, woo hoo! Not so fast: despite having the mover set up to flush the cache hourly, my SSDs can't keep up with the network. Every hour or so my cache fills up and rsync fails, kicking back a message similar to the following: rsync error: error in file IO (code 11) at receiver.c(378) [receiver=3.2.3] rsync: [sender] write error: Broken pipe (32) My question: aside from purchasing a larger SSD (current SSD is 480 GB), is there a way to throttle the connection so the mover has time to push the cache to the HDDs, or to have rsync simply wait its turn to continue writing? ..... or am I talking EMC/NetApp-type enterprise features? Thanks in advance.
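     One way to throttle at the source, rather than buying a bigger cache, would be rsync's --bwlimit option, e.g. something like the sketch below (rate, paths and hostname are placeholders; pick a rate close to what the array can absorb directly):

        # cap rsync so the mover can drain the cache faster than it fills
        rsync -avP --bwlimit=100m /source/media/ root@tower:/mnt/user/media/

     That's only the idea, not a tested recipe; the real fix is sizing the cache (or skipping it) for bulk copies.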
  11. Thanks all. I went ahead and renamed the network.cfg file and, as hoped, Unraid created a new one. I'm now able to access my shares and files, and am in the process of copying them over to my new Unraid server.
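     For reference, the fix amounted to something like this from the console (assuming the stock config location on the flash drive):

        # keep the old file around rather than deleting it outright
        mv /boot/config/network.cfg /boot/config/network.cfg.bak
        # Unraid regenerates a default network.cfg on the next boot
        reboot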
  12. Just to reiterate - the web UI is accessible in safe mode but stops responding when in normal mode. Currently running a trial key. Thanks in advance unraid1-diagnostics-20210411-1128.zip
  13. Correct. I'm beginning to wonder if I created a network config issue. Should I delete the network.cfg file and reboot?
  14. Thanks SimonF, I just PM'd the diagnostics file to you
  15. Thanks for getting back to me SimonF. Yes, I tried safe mode - same result (web UI unavailable). Contents of the go file are as follows
  16. Ok, so here's the latest. I went back to basics and scanned the USB stick for errors, of which there were some. I ran the fix-disk-errors wizard and voila, no more kernel panics. The new problem is that the web UI won't start, and while I can ping and SSH into the box, I can't access my data because (me in my infinite wisdom) I never set the array to auto-start. Screenshot of my mount status below. Would love some suggestions please!
  17. Thanks for the reply. Yes, same versions. This is proving to be extremely frustrating.... Now trying to manually recover the drives. What is Unraid's value if it's this difficult to recover? I might as well build my own NAS with Ubuntu and Cockpit 🤷‍♂️
  18. Hi there, The other day I received an alert that my USB boot drive was near failure, so I created a new backup and used the USB Creator app to make myself a new one as per the instructions. I then rebooted the server with the new USB stick and got the following kernel panic: "end kernel panic - not syncing: VFS: unable to mount root fs on unknown-block(0,0)". I tried rebooting with the same result, then went back to the original USB, which produced the same kernel panic. I then created a new USB (different stick) based on a backup from 3 days ago, with the same result. Because all I care about is the data, I created another brand new USB following the "Files on v6 boot drive" page in the Unraid docs, copying only super.dat and disk.cfg, but upon reboot the drives are there and it doesn't recognize the btrfs configuration. I've tried a medley of USB sticks and have even recreated the USB using the manual method. Looking for suggestions please.
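     For anyone in the same boat, the copy step from that docs page looks roughly like this (paths are placeholders, assuming the flash backup is unzipped to /tmp/flash-backup and the new stick is mounted at /boot):

        # super.dat holds the array/disk assignments, disk.cfg the array settings
        cp /tmp/flash-backup/config/super.dat /boot/config/
        cp /tmp/flash-backup/config/disk.cfg  /boot/config/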
  19. No, not yet - I'm still debating, but leaning towards waiting for the Linux version of TrueNAS (Scale) to be released this summer. I've been testing Unraid and while it offers some great features, I still feel as though the platform is beta. Adding ZFS to the mix makes it unsupported and therefore alpha IMO. I think the platform has tons of promise, but the developers need to get with the times and support something other than USB boot. Right now the ZFS situation is a deal breaker, and BTRFS just doesn't offer the rich features of ZFS. Also, key plugins like Krusader just aren't there yet. I can't justify paying a premium for something that doesn't meet the majority of my requirements. Will try again in a few years. Peace
  20. Hi there, I'm about to make the switch from TrueNAS to Unraid after spending several painful months fighting with poorly written free software. That said, the one thing I really liked about it was ZFS. I've been reading and watching various tutorials on setting ZFS up on Unraid and it looks pretty straightforward to get up and running. One of the nice things about TrueNAS and ZFS is that you could blow away your config, reinstall, and all you needed to do was simply import the ZFS volume and you were back in business. My question: can anyone out there who currently runs the ZFS plugin on Unraid provide some guidance on how difficult it is to manage / recover a ZFS volume in Unraid when something like a power failure abruptly takes out the Unraid configuration? I don't want to create a giant make-work recovery project (in the event a piece of hardware fails) if sticking with native BTRFS is the safer path.
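     For context, the recovery path I'm hoping carries over from TrueNAS is a plain pool import after rebuilding the box, roughly as sketched below (the pool name is a placeholder; this assumes the ZFS plugin exposes the standard zpool tools):

        # list pools the system can see but hasn't imported yet
        zpool import
        # import by name, scanning the stable by-id device links
        zpool import -d /dev/disk/by-id tank
        zpool status tank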
  21. Yes, apparently UEFI is not supported on enterprise equipment. 🤷‍♂️
  22. Fair question; I have an enterprise-class device sitting in my basement and I want to use the technology to its fullest. Booting from a USB stick is not industry best practice. Plus, I've been trying to get the PowerEdge to boot the Unraid USB for 2 days without any success. I've tried BIOS and UEFI and I still get errors like "this is not a bootable disk", etc. Meanwhile I can boot the stick from my PC. And yes, I've tried multiple USB sticks. And yes, I'm able to boot other installers from USB (specifically ESXi, Windows Server, etc.). I hate USB sticks!