ashman70

Members
  • Posts: 2620
  • Joined
  • Last visited
  • Days Won: 1

ashman70 last won the day on December 28 2018 and had the most liked content!



ashman70's Achievements

Proficient (10/14)

Reputation: 73

Community Answers

  1. My array is running BTRFS, and I have discovered that one of my disks contains 31 corruption errors. From what I gather, I need to run the dmesg command to locate the file paths of the corrupted files? I am having difficulty tracking down the syntax I need to use to do this. Has anyone done this before? I have everything backed up on a second system running ZFS; however, this is my production server. TIA
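     For anyone landing here with the same question, one common approach is sketched below. The mount point /mnt/disk1 is an assumption, not taken from this post; substitute the path of the affected disk.

     ```shell
     # Show per-device error counters (read/write/corruption) for the filesystem
     btrfs device stats /mnt/disk1

     # Start a scrub: it re-reads all data and logs any checksum failures
     btrfs scrub start /mnt/disk1

     # Check scrub progress and results
     btrfs scrub status /mnt/disk1

     # During/after the scrub, the kernel log names the affected files
     dmesg | grep -i 'checksum error'
     ```

     The dmesg lines for data corruption typically include a "path:" field pointing at the damaged file, which can then be restored from backup.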
  2. For some reason Radarr has reset itself, so I've lost all my settings and everything. I have an appdata backup from August 28th that I am trying to use, but I'm having no luck. I upgraded my server to 6.12.4 after this appdata backup was done, so it was likely made with the deprecated appdata plugin; can I still use the backup to restore Radarr? The issue I am having with the new appdata plugin is that when it asks for a backup source, I am not sure where to point it. When I click on the question mark for this, it says 'The folder which contains ab_xxx folders'. I don't know where this is.
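     A quick way to hunt for those folders from the unRAID console, assuming the old backups live somewhere under /mnt/user (an assumption; adjust if they were stored elsewhere):

     ```shell
     # Search the user shares for directories whose names start with "ab_"
     find /mnt/user -maxdepth 4 -type d -name 'ab_*' 2>/dev/null
     ```

     Whatever directory contains the matches is what the plugin's "backup source" field is asking for.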
  3. Finally upgraded from 6.11.5 after having to roll back from 6.12.3 because of issues with my 11th gen Intel system. This time everything went well, no issues, all good.
  4. So we would need some more details. When I do this, I remove the disk I want to replace from the pool in unRAID by stopping the array, removing the disk, then restarting the array. ZFS reports the pool is degraded and a disk is missing. I then stop the array and physically replace the drive (I don't have to power the server off as I have hot-swap drive bays). Once the new drive has been added to the pool to replace the old one, I start the array. At this point ZFS should detect a new drive to replace the missing one and automatically start the resilvering process. What do you mean when you say it shows disks 2, 3 and 4 as wrong? Wrong how? Are you sure the resilvering process was completed before you replaced the last drive? Did you check the logs in unRAID for any info? Do you have a backup of your data?
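     On the command line, the same replacement flow looks roughly like this. The pool name (tank) and device names are placeholders; take the real ones from `zpool status`.

     ```shell
     # Check pool health: a missing disk shows as DEGRADED with one device UNAVAIL
     zpool status tank

     # Rebuild onto the new disk (old device or its GUID first, new device second)
     zpool replace tank /dev/sdX /dev/sdY

     # Watch the resilver: wait for "resilver completed" before touching another drive
     zpool status -v tank
     ```

     The last point matters for the question above: replacing a second drive before the first resilver finishes can push a RAIDZ vdev past its redundancy.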
  5. I have an Nvidia P2000 video card that has been used for Plex transcoding for the past 6 years or so. It was purchased used from eBay and never used for anything other than Plex transcoding, where it performed without issue. Looking to sell locally for $200.00 CAD, willing to ship at the buyer's expense. Located in Milton, Ontario.
  6. How could I send a ZFS snapshot to another unRAID server, assuming they are on the same LAN?
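     The usual pattern is to pipe `zfs send` into `zfs receive` over SSH. The pool/dataset names, snapshot names, and IP address below are examples only:

     ```shell
     # Take a snapshot of the dataset you want to copy
     zfs snapshot tank/media@backup1

     # Stream it to the second server (root SSH access assumed on both ends)
     zfs send tank/media@backup1 | ssh root@192.168.1.50 zfs receive backuppool/media

     # Later, send only the changes since the last snapshot (incremental)
     zfs snapshot tank/media@backup2
     zfs send -i tank/media@backup1 tank/media@backup2 | ssh root@192.168.1.50 zfs receive backuppool/media
     ```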
  7. Thanks, I got it sorted out, but now I have one drive in a pre-fail state and one drive showing unallocated sectors. I'm crossing my fingers that the parity check completes without errors, as I only have one parity drive.
  8. I just had my 16-port HBA die. Fortunately I have two HBAs on hand that can handle 8 drives each; however, to use them I have to pull my P2000, which I have been using for Plex transcoding. How do I change the parameters of the Plex docker to utilize the iGPU in my 11th gen Intel CPU, if the Plex docker won't even start after I pull the P2000 and start the server? My bigger problem is that unRAID now thinks many of the disks are in the wrong slots. I think if I do a new config and reassign them as they are now and were prior to this issue, the array should start and I should be OK? Thoughts?
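     For the iGPU half of this question, the typical change (container settings shown are common defaults, not taken from this post) is to remove the NVIDIA-specific pieces from the Plex template and pass the Intel render node through instead:

     ```shell
     # Confirm the host exposes the Intel iGPU (i915 driver loaded)
     modprobe i915
     ls /dev/dri    # should list something like card0 and renderD128

     # In the Plex container template, remove --runtime=nvidia and the
     # NVIDIA_VISIBLE_DEVICES variable, then add to Extra Parameters:
     #   --device=/dev/dri
     # which corresponds to this flag on a plain docker run:
     docker run -d --name=plex --device=/dev/dri plexinc/pms-docker
     ```

     With the NVIDIA runtime removed, the container should start even with the P2000 pulled, and Plex can then use Quick Sync for transcoding.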
  9. When you ran the upgrade assistant, what did it tell you? From your logs it appears you have Nerd Tools and several Dynamix plugins installed; or did you remove these prior to upgrading?
  10. Yes, you would have to move everything from the cache back to the array, reformat the cache, and move it back. Was everything stable under 6.11.5? What are you running on the server: dockers, VMs?
  11. Did you run the upgrade assistant before upgrading? What version did you upgrade from? Are you able to roll back to the version you upgraded from? I have not looked at your logs yet.
  12. Ok so if I make all the vdevs the same width how do I accomplish what I want to do? I want to have one share that utilizes the disk space on all three vdevs.
  13. Is it possible under the current implementation of unRAID to create one pool with two VDEV's? I have 22x8TB drives and 13x5TB drives, I want to create a vdev of each in RAIDZ2, then I want to add them to one pool. The goal is to have a shared that utilizes the space on the singe pool made up of both vdevs. Right not, at least through the gui, it appears I can only create two pools. one utilizing the 8TB drives and one utilizing the 5TB drives and this limits me to to a share using either the 8TB drives or 5TB drives, when I want a share to utilize both.