joshbgosh10592

Members
  • Content Count: 50
  • Community Reputation: 0 Neutral
  • Rank: Newbie


  1. Because it's RAID-0, wouldn't I be able to get the full size of both disks added together, since nothing is lost to parity? I'm trying to see how to expand it, but all the documentation refers to the pool by a path inside /dev/. I'm assuming it would be the first disk in the pool, /dev/sdi. Looking at the output of parted's list, it only shows sdi1 and nothing about sdp, and the size is only 2TB when the original size was 4TB. I'm wondering whether the pool didn't accept the replacement (see the pool-inspection sketch after this list).
  2. So, I did more research on it and thought that would work, but after I ran it, the size didn't seem to change - "df -h /mnt/disks/PVEData" still shows the same as before. Thoughts?
  3. I'm curious how btrfs knows the new disk is its replacement, because that worked! Thank you! I'm assuming that to expand the btrfs filesystem (the original was 2x2TB, and I just swapped one out for a 3TB), I just use this, correct? Since the total usable should be 5TB:
         btrfs filesystem resize max /mnt/disks/PVEData
     Currently:
         root@NAS:/var/log# df -h /mnt/disks/PVEData/
         Filesystem      Size  Used Avail Use% Mounted on
         /dev/sdi1       3.7T  1.7T  2.0T  47% /mnt/disks/PVEData
  4. No worries! So is it done via the unRAID webUI or btrfs commands, and how? I thought that with Unassigned Devices you only tell one of the disks in the btrfs pool to mount and ignore the other, since it just mounts along with it? Just trying to get it straight so I don't lose anything.
  5. I'd have to do all of that to tell btrfs to use a different drive in one of my unassigned devices btrfs pools? I don't have a cache in that. I'm confused.
  6. I figured there would be a way to clone to a larger disk and just resize the pool afterwards. Regardless, though, how would I tell the pool to use sdp instead of sdj (see the replace sketch after this list)? Thank you so far!!
  7. I know this is a very old post, but I'm just now having an issue where I need to use ddrescue myself. While looking around, I found THIS PAGE, which seems to be helping me. Hopefully this will help future Googlers, too (a basic ddrescue sketch is after this list).
  8. Thank you! I'm working on clearing anything off that pool that I can, but how would I swap the failing drive with the new one? The replacement drive is larger than the existing one, but I'm assuming that's just a btrfs resize command.
  9. Is there a way to flag the repair as something like "accept loss" when it comes across data it can't read? I'm just figuring I'm missing something, as it's not really failing but rather has run out of sectors to write to (as far as I understand), so there's probably a file or two that's corrupted, and I'm willing to accept that (see the scrub sketch after this list).
  10. So, I ran this last night and it was progressing pretty well (about 10% an hour). However, it's still not finished and is stuck at 94.5%; in UD it shows "command timedout" for sdj, and the "Current pending sector" count is climbing like crazy (yesterday it was 148, right now it's at 2171). Is there something special I should have done instead of the normal replace because of the errors the drive was throwing? (See the SMART-monitoring sketch after this list.)
  11. Thank you! As a note for future me/Googlers, I used:
          btrfs replace start -f /dev/sdj1 /dev/sdp1 /mnt/disks/PVEData
      where sdj is the failing drive and sdp is the replacement drive. I also learned that you cannot have a trailing "/" or it'll error with:
          ERROR: source device must be a block device or a devid
  12. Same issue here... Still on 6.8.3, and I don't want to upgrade to an -rc branch as I don't have enough time to provide feedback on it. "/etc/rc.d/rc.nginx restart" temporarily solved it, but we'll see for how long (a cron stopgap is sketched after this list).
  13. I have a RAID-0 btrfs array using Unassigned Devices that I configured via the command line. I have now received two warnings about "Current pending sector", first at 62 and then at 148, so I'm assuming the drive needs to be removed. I'm trying to follow the steps here for replacement, but they seem to assume the drive has completely failed, and they're written for RAID-1 (see the replace sketch after this list). I'm assuming it should be somewhat similar, but I don't want to lose the data. It's not important, but I'd rather not lose it. The directions say to use something similar to btrfs replace start 7 /dev/sdf1 /mnt Obviously replaci
  14. There's gotta be something we can do, as now that my transcode cache is set to /tmp, the size hasn't changed at all, not even during even bigger batches. I just don't know enough to know what to look at... Thank you, though! Hopefully someone else will come across this thread and help us out (a /tmp sizing sketch is after this list).
  15. Yup! (A CLI cross-check is sketched after this list.)
          Name             Container   Writable   Log
          ---------------------------------------------
          tdarr_aio        7.68 GB     395 kB     5.24 MB
          binhex-krusader  1.92 GB     35.1 MB    13.0 kB
          Shinobi          1.05 GB     93.2 MB    540 kB
          unmanic          567 MB      304 kB     5.72 kB
          HandBrake        424 MB      0 B        23.4 MB
          QDirStat         251 MB      0 B        23.4 MB
          OpenSpeedTest
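
A minimal way to inspect the pool from posts 1-3, assuming it is still mounted at /mnt/disks/PVEData as in the thread. parted only ever shows one disk's partition table at a time; btrfs's own tools list every member device. Note that on a multi-device filesystem, resize works per device: a bare "max" only grows devid 1, so the swapped-in disk's devid has to be named explicitly (the "2" below is an assumption; read the real devid from the show output):

    # List every member device of the pool and its allocated space:
    btrfs filesystem show /mnt/disks/PVEData
    btrfs filesystem usage /mnt/disks/PVEData

    # Grow the replaced member to the full size of its new disk;
    # "2" is the devid reported by the show command above:
    btrfs filesystem resize 2:max /mnt/disks/PVEData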
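
For the device swap asked about in posts 6, 8, and 13, a sketch of the generic flow with both drives connected, using the thread's device names (sdj1 failing, sdp1 replacement) and mount point:

    # Copy sdj1's contents onto sdp1 and switch the pool over to it;
    # -f overwrites any old filesystem signature on the target:
    btrfs replace start -f /dev/sdj1 /dev/sdp1 /mnt/disks/PVEData

    # Poll progress until it reports "finished":
    btrfs replace status /mnt/disks/PVEData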
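
For post 7, a basic GNU ddrescue sketch for imaging a failing disk onto its replacement. The device names follow the thread; the mapfile name is my own choice, and the mapfile is what lets an interrupted run resume instead of starting over:

    # Pass 1: grab everything readable quickly, skipping bad areas (-n);
    # -f is required when writing to a block device:
    ddrescue -f -n /dev/sdj /dev/sdp rescue.map

    # Pass 2: retry just the bad areas a few more times:
    ddrescue -f -r3 /dev/sdj /dev/sdp rescue.map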
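
I'm not aware of an "accept loss" flag on the replace itself (post 9), but the loss can at least be made visible after the copy: a scrub re-checks every checksum, and the kernel log names the files that no longer verify. A sketch, same mount point assumed:

    # Scrub in the foreground (-B) and summarize checksum errors:
    btrfs scrub start -B /mnt/disks/PVEData

    # Paths of affected files land in the kernel log:
    dmesg | grep -i 'checksum error'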
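
To watch the climbing counter from post 10 while a replace or rescue runs, the SMART attributes can be polled directly (sdj being the failing drive per the thread):

    # Dump SMART attributes and filter for the sector-health counters:
    smartctl -A /dev/sdj | grep -Ei 'pending|reallocated'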
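
For the webUI hang in post 12, a purely hypothetical stopgap until upgrading is an option: rerun the restart command from the post on a schedule. It papers over the symptom, nothing more:

    # Crontab entry: bounce the webUI's nginx every night at 04:00
    0 4 * * * /etc/rc.d/rc.nginx restart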
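
Relevant to post 14: on many setups /tmp is RAM-backed tmpfs, so transcodes written there may never show up as disk usage at all. Two generic checks; the 8G cap is an arbitrary example, not a recommendation:

    # See what /tmp is mounted as and how full it gets mid-transcode:
    df -h /tmp

    # Illustration of a size-capped tmpfs mount:
    mount -t tmpfs -o size=8G tmpfs /tmp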
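
The table in post 15 can be cross-checked from the CLI: docker ps -s adds a size column covering each container's writable layer:

    # Container names with writable-layer sizes, formatted as a table:
    docker ps -s --format 'table {{.Names}}\t{{.Size}}'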