Everything posted by ashman70

  1. Right now my Plex metadata resides on a 500GB SSD mounted as an unassigned device. I have a second, empty 500GB SSD that I want to copy all the Plex data over to, so I can eventually have both SSDs in a RAID 1 cache pool for Plex. The SSD that currently contains the Plex metadata is formatted with the XFS file system, so I have to move everything off it to reformat it as BTRFS. What is the most efficient method, in terms of speed, of copying the Plex metadata from one drive to the other? There are thousands and thousands of files.
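     What I have in mind so far is a plain rsync between the two unassigned-device mount points (the paths below are placeholders for wherever yours are mounted); is there anything faster for this many small files?

        # copy everything, preserving ownership, timestamps and hard links
        rsync -avH --progress /mnt/disks/plex_old/ /mnt/disks/plex_new/

     Running it a second time afterwards is a cheap way to confirm nothing was missed.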
  2. Thanks, that makes sense. So far this morning, after running for 12 hours with 6 hours left until completion, it's reporting 226 uncorrectable errors. I do have a backup of my data. Attaching diagnostics. backuptower-diagnostics-20240328-0741.zip
  3. Running unRAID 6.12.8, my array is BTRFS. One of my drives is reporting corruption, and I am currently running a scrub on the disk. If the scrub can't repair the corruption and I replace the disk and rebuild it from parity, what happens to the corrupted data? Does it remain corrupted, and if so, how can I repair it?
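     In case it matters, this is how I am keeping an eye on the scrub from the console (the disk number is just an example):

        btrfs scrub status /mnt/disk3    # shows progress plus corrected/uncorrectable error counts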
  4. Thank you, that fixed it.
  5. Yesterday my backup server booted just fine; I may have updated some plugins, I can't remember. Today it won't boot and it's complaining about the Disk Location plugin; it just hangs with an error referring to a config file. I'm not able to upload diagnostics, but I can boot the server in safe mode. What can I do on the flash drive to get the server to boot? Can I delete the plugin from the flash drive, or will that cause further problems? Running unRAID 6.12.6.
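     What I was considering trying, on the assumption it's the right idea: unRAID loads plugins from config/plugins on the flash drive, so moving the .plg out of the way should stop it loading at boot (the filename below is a guess; I'd check what is actually on the stick first):

        # with the flash drive in another PC, or from the console (flash mounted at /boot)
        mv /boot/config/plugins/disklocation-master.plg /boot/config/plugins/disklocation-master.plg.bak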
  6. Thanks, I am aware and doing that right now, appreciate your help.
  7. Thanks, blkid was what I was looking for though.
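     For anyone searching later, this is the sort of thing I meant (run from the console; the devices listed will be your own):

        blkid                     # lists each device with its UUID and filesystem type
        ls -l /dev/disk/by-id/    # maps devices to their model/serial identifiers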
  8. How do I identify the disks again? I did this once before but can't remember; there was a command I ran that listed all the disk IDs.
  9. After upgrading to the latest release of unRAID I've noticed a lot of my drives are reporting corruption. I understand it's possible that, with the new kernel in this release, it's detecting corruption that may have been there for some time but was just not detected by previous kernels. Should I run a scrub on each affected disk? I do have a complete backup of my data on another server running ZFS.
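     If a scrub per disk is the way to go, I assume it's just this for each affected disk (the disk number is an example):

        btrfs scrub start /mnt/disk2     # kick off a scrub on one array disk
        btrfs scrub status /mnt/disk2    # check progress and error counts later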
  10. My array is running BTRFS, and I have discovered that one of my disks contains 31 corruption errors. From what I gather, I need to use the dmesg output to locate the file paths of the corrupted files? I am having difficulty tracking down the syntax I need to use to do this. Has anyone done this before? I have everything backed up on a second system running ZFS, however this is my production server. TIA
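     The syntax I've been trying to piece together, in case someone can confirm it, is roughly this (the inode and disk numbers are placeholders):

        # pull the BTRFS errors out of the kernel log and look for the checksum
        # lines; scrub messages usually include the inode and often the path
        dmesg | grep -i btrfs | grep -iE "csum|checksum"
        # if a message only gives an inode number, resolve it to a path on that disk
        btrfs inspect-internal inode-resolve 123456 /mnt/disk5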
  11. For some reason Radarr has reset itself, so I've lost all my settings and everything. I have an appdata backup from August 28th that I am trying to use, but I'm having no luck. I upgraded my server to 6.12.4 after this appdata backup was done, so it was likely made with the deprecated appdata backup plugin; can I still use the backup to restore Radarr? The issue I am having with the new appdata plugin is that when it asks for the backup source, I am not sure where to point it. When I click on the question mark for this it says 'The folder which contains ab_xxx folders', but I don't know where this is.
  12. Finally upgraded from 6.11.5, after having had to roll back from upgrading to 6.12.3 due to issues with my 11th gen Intel system. This time everything went well, no issues, all good.
  13. So we would need some more details. When I do this, I remove the disk I want to replace from the pool in unRAID by stopping the array, removing the disk, then restarting the array. ZFS reports the pool is degraded and a disk is missing. I then stop the array and physically replace the drive (I don't have to power the server off as I have hot-swap drive bays). Once the new drive has been added to the pool to replace the old one, I start the array. At this point ZFS should detect a new drive to replace the missing one and automatically start the resilvering process. What do you mean when you say it shows disks 2, 3 and 4 as wrong? Wrong how? Are you sure the resilvering process completed before you replaced the last drive? Did you check the logs in unRAID for any info? Do you have a backup of your data?
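     If you are not sure whether the resilver finished, the quickest check is from the console (the pool name is whatever yours is called):

        zpool status -v yourpool    # shows vdev state, resilver progress and any file-level errors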
  14. I have an Nvidia P2000 video card that has been used for Plex transcoding for the past 6 years or so. It was purchased used from eBay and never used for anything other than Plex transcoding, where it performed without issue. Looking to sell locally for $200.00 CAD; willing to ship at buyer's expense. Located in Milton, Ontario.
  15. How could I send a ZFS snapshot to another unRAID server, assuming they are on the same LAN?
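     What I'm imagining, if I have the syntax right (the dataset names, snapshot name and destination IP are all placeholders), is something like:

        # snapshot the dataset, then stream it to the other server over SSH
        zfs snapshot tank/media@backup1
        zfs send tank/media@backup1 | ssh root@192.168.1.50 zfs receive backuppool/media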
  16. Thanks, I got it sorted out, but now I have one drive in a pre-fail state and one drive showing unallocated sectors. I'm crossing my fingers the parity check completes without errors, as I only have one parity drive.
  17. I just had my 16 port HBA die. Fortunately I have two HBAs on hand that can handle 8 drives each, however to use them I have to pull my P2000, which I have been using for Plex transcoding. How do I change the parameters of the Plex docker to utilize the iGPU in my 11th gen Intel CPU, if the Plex docker won't even start after I pull the P2000 and start the server? My bigger problem is that unRAID now thinks many of the disks are in the wrong slots. I think if I do a new config and reassign them as they are now and were prior to this issue, the array should start and I should be OK? Thoughts?
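     My rough plan for the Plex container, assuming this is even the right approach: edit the template, strip out the NVIDIA bits, and pass the iGPU device through instead, checking first that the device exists on the host:

        ls /dev/dri                  # confirm the iGPU is visible on the host
        # in the Plex docker template: remove --runtime=nvidia and the
        # NVIDIA_VISIBLE_DEVICES variable, then add under Extra Parameters:
        --device=/dev/dri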
  18. When you ran the upgrade assistant what did it tell you? From your logs it appears you have nerd tools and several dynamix plugins installed, or did you remove these prior to upgrading?
  19. Yes, you would have to move everything from the cache back to the array, reformat the cache, and move it back. Was everything stable under 6.11.5? What are you running on the server: dockers, VMs?
  20. Did you run the upgrade assistant before upgrading? What version did you upgrade from? Are you able to roll back to the version you upgraded from? I have not looked at your logs yet.
  21. OK, so if I make all the vdevs the same width, how do I accomplish what I want to do? I want to have one share that utilizes the disk space on all three vdevs.
  22. Is it possible under the current implementation of unRAID to create one pool with two vdevs? I have 22x8TB drives and 13x5TB drives; I want to create a vdev of each in RAIDZ2, then add them to one pool. The goal is to have a share that utilizes the space on the single pool made up of both vdevs. Right now, at least through the GUI, it appears I can only create two pools: one utilizing the 8TB drives and one utilizing the 5TB drives, and this limits me to a share using either the 8TB drives or the 5TB drives, when I want a share to utilize both.
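     For clarity, the layout I'm after is the ZFS equivalent of this (device names are placeholders and the lists are trimmed down; I realise unRAID would normally build the pool through its GUI rather than by hand):

        # one pool made of two raidz2 vdevs; a share on the pool then spans both vdevs
        zpool create tank \
          raidz2 sdb sdc sdd sde sdf sdg \
          raidz2 sdh sdi sdj sdk sdl sdm
        zpool status tank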