tr0910


  1. Yep, and just as it passed 8% we had a power blink from a lightning storm, and I intentionally did not have this plugged into the UPS. It failed gracefully but restarted from zero. I have perfect drives that I will replace this with, but why not experience all of ZFS's quirks while I have the chance? If the drive fails during resilvering, I won't be surprised. If ZFS can manage resilvering without getting confused on this dingy hard drive, I will be impressed.
  2. @glennv @jortan I have installed a drive that is not perfect and started the resilvering (this drive has some questionable sectors). Might as well start with the worst possible case and see what happens if resilvering fails. (grin) I have Docker and a VM running from the degraded mirror while the resilvering is going on. Hopefully this doesn't confuse the resilvering. How many days should a resilver take to complete on a 3TB drive? It's been running for over 24 hours now (see the progress-check sketch after this list).
       zpool status
         pool: MFS2
        state: DEGRADED
       status: One or more devices is currently being resilvered.
  3. (bump) Has anyone done a zpool replace? What is the unRaid syntax for the replacement drive? zpool status is reporting strange device names above.
  4. unRaid has a dedicated following, but there are some areas of general data integrity and security that unRaid hasn't developed as far as its Docker and VM support. I would like OpenZFS baked in at some point, and I have seen some interest from the developers, but they have to get around the Oracle legal bogeyman. I have seen no discussion around SnapRAID. Check out ZFS here.
  5. I need to do a zpool replace, but what is the syntax for using it with unRaid? I'm not sure how to reference the failed disk. I need to replace the failed disk without trashing the ZFS mirror. A 2-disk mirror has dropped one device. Unassigned Devices does not even see the failing drive at all any more. I rebooted and swapped the slots for these 2 mirrored disks, and the same problem remains. The failure follows the missing disk (see the replace sketch after this list).
       zpool status -x
         pool: MFS2
        state: DEGRADED
       status: One or more devices could not be used because the label is missing or invalid.
  6. Passthrough of the iGPU is not often done, and it's not required for most Win10 use via RDP. Mine was not passed through.
  7. I don't have this combo, but a similar one. Windows 10 will load and run fine with the integrated graphics on mine. I'm using Windows RDP for most VM access. The only downside is that video performance is nowhere near bare metal: perfectly usable for Office applications and Internet browsers, and totally fine for programming, but weak for anything where you need a quick response from keyboard and mouse, such as gaming. The upside is that RDP runs over the network, so there's no separate cabling for video or mouse. For bare-metal performance, a dedicated video card for each VM is required.
  8. Every drive has a death sentence. But just like Mark Twain, "the rumors of my demise are greatly exaggerated". It's not so much the number of reallocated sectors that is worrying, but whether the drive is stable and not adding more reallocated sectors on a regular basis. Use it with caution (maybe run a second preclear to see what happens), and if it doesn't grow any more bad sectors, put it to work. I have had 10-year-old drives continue to perform flawlessly, and I have had them die sudden and violent deaths much younger. Keep your parity valid, and also back up important data.
  9. I've attempted to move the Docker image to ZFS along with appdata. VMs are working. Docker refuses to start. Do I need to adjust the BTRFS image type? Correction: VMs are not working once the old cache drive is disconnected.
  10. ZFS was not responsible for the problem. I have a small cache drive, and some of the files for Docker and VMs still come from there at startup. This drive didn't show up on boot. Powering down and making sure this drive came up resulted in VMs and Docker behaving normally. I need to get all appdata files moved to ZFS and off this drive, as I am not using it for anything else.
  11. I have had one server on 6.9.2 since the initial release, and a pair of ZFS drives has been serving Docker and VMs without issue. I just upgraded a production server from 6.8.3 to 6.9.2, and now Docker refuses to start and the VMs on ZFS are not available. zpool status looks fine (see the upgrade sketch after this list):
         pool: MFS2
        state: ONLINE
       status: Some supported features are not enabled on the pool. The pool can still be used, but some features are unavailable.
       action: Enable all features using 'zpool upgrade'. Once this is done, the pool may no longer be accessible by software that does not support the features.
  12. If I understand you right, you are suggesting that I just monitor the error and not worry about it. As long as it doesn't deteriorate, it's no problem. Yes, this is one approach. However, if these errors are spurious and not real, resetting them to zero is also OK. I take it there is no unRaid parity check equivalent for ZFS? (In my case, the disk with these problems is generating phantom errors. The parity check just confirms that there are no errors.)
  13. I have a 2-disk ZFS mirror being used for VMs on one server. These are older 3TB Seagates, and one is showing 178 pending and 178 uncorrectable sectors. An unRaid parity check usually finds these errors are spurious and resets everything to zero. Is there anything similar to do with ZFS? (See the scrub sketch after this list.)
  14. Thx, using ZFS for VMs and Docker now. Yes, it's good. The only issue is updating ZFS when you update unRaid. Regarding unRaid and the enterprise, it seems that the user base is more the Blu-ray and DVD hoarders. There are only a few of us that use unRaid outside of this niche. I'll be happy when ZFS is baked in.
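
A few sketches for the ZFS commands referenced in the posts above. First, for the resilver-duration question in post 2: while a resilver runs, zpool status prints a "scan:" line with the percent done and an estimated time remaining. A minimal sketch, assuming the pool name MFS2 from the posts:

    # Show pool health; while a resilver is running, the "scan:" line
    # reports percent done and an estimated time to completion.
    zpool status MFS2

    # Re-run every 60 seconds to watch the resilver advance.
    watch -n 60 zpool status MFS2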
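
For the zpool replace syntax asked about in posts 3 and 5, a minimal sketch. The pool name MFS2 comes from the posts; the GUID and the by-id path are hypothetical placeholders to be read from your own zpool status and /dev/disk/by-id listing. On unRaid, the stable /dev/disk/by-id names are the safest way to reference disks, since /dev/sdX letters can change between boots:

    # List disks with stable names; note the new disk's by-id path.
    ls -l /dev/disk/by-id/

    # Show the pool with numeric vdev GUIDs instead of device names;
    # the GUID still identifies the failed member even when its
    # label is missing or invalid.
    zpool status -g MFS2

    # zpool replace <pool> <old-device-or-guid> <new-device>
    # (the GUID and by-id path below are hypothetical examples)
    zpool replace MFS2 1234567890123456789 /dev/disk/by-id/ata-NEWDISK

zpool status should then show the pool resilvering onto the new disk.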
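
For the feature-flags status in post 11, the action line names the command itself. A minimal sketch, again assuming MFS2; note that this is a one-way change, and afterwards an older ZFS build (for example, the plugin version on an unRaid release you roll back to) may no longer import the pool:

    # List pools that have disabled feature flags.
    zpool upgrade

    # Enable all supported features on this pool (one-way: older
    # ZFS software may no longer be able to import it afterwards).
    zpool upgrade MFS2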
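
For the parity-check question in posts 12 and 13, the closest ZFS equivalent is a scrub, which reads every block in the pool and verifies its checksum; zpool clear then resets the pool's error counters, much like unRaid zeroing spurious counts. A minimal sketch, assuming MFS2:

    # Read and checksum-verify all data in the pool (the ZFS
    # analogue of an unRaid parity check).
    zpool scrub MFS2

    # Watch scrub progress and see any errors it found.
    zpool status MFS2

    # Reset the pool's read/write/checksum error counters to zero.
    zpool clear MFS2

Note that the pending and uncorrectable sector counts in post 13 are SMART attributes kept by the drive's own firmware; a scrub exercises the sectors, but it does not reset those counters.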