jasonbstanding

Members · 14 posts
Everything posted by jasonbstanding

  1. (I'm using 6.12.6 currently and normally keep up with the latest.) My box is a TS140 with a 1TB WD NVMe assigned for cache and 24GB of RAM. I recently read in here about someone recommending configuring their Plex install to store its AppData on /mnt/cache/appdata/plex for speed. I considered moving my Plex appdata folder onto the cache drive and noticed it already exists there. I then looked at the config for the appdata share and realised that it's set to Cache for primary storage (it was a while ago I set this up, and I'd forgotten...!). My understanding is that this means that setting the Plex container to look at /mnt/user/appdata/plex means it's in effect actually working with /mnt/cache/appdata/plex - would there be any performance benefit to pointing it directly at the cache drive instead?
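(One quick way to compare the two paths is to check which filesystem each one reports: as I understand it, anything under /mnt/user goes through Unraid's shfs FUSE layer, while /mnt/cache is the pool's native filesystem. A minimal sketch - the Unraid paths in the comment are the ones from my setup above, and the /tmp call at the end is just a portable demo:)

```shell
# Print the filesystem type a path lives on.
# On my Unraid box (assumption, not verified output):
#   check_fs_type /mnt/user/appdata   -> the shfs/FUSE layer
#   check_fs_type /mnt/cache/appdata  -> the pool's native fs (xfs/btrfs)
check_fs_type() {
    stat -f -c %T "$1"
}

# Portable demo on a path that exists everywhere:
check_fs_type /tmp
```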
  2. Ah! Never mind, I figured it out! I stopped the array, removed the drive from the array, started the array again, stopped the array, added it back - and when I restarted in Maintenance Mode the Sync option was available again. All looking good so far... (and a huge sigh of relief was heard)
  3. OK, I understand that. I've checked the connections and tried booting up again - I'm not sure what my next move should be though. Disk 3 is showing as unmountable. If I put the array into Maintenance mode and tick the Format Drive box I get a warning that says a format should never form part of a sync operation. If I stop the array and remove this drive it lists the drive in the "missing drives" list underneath. Is there a way to re-sync on to this drive again and hopefully see if the read errors on the other drive stop happening?
  4. Hiya, I recently replaced a 4TB drive before discovering the issue was that the power cable needed replacing, so I thought I'd put it back into my array to replace one of the 2TB drives in there. I erased the 4TB on another machine, and then plugged it into my array replacing the 2TB drive. With a stopped array I selected the new drive to go in place of the missing drive, then started the array in Maint mode and then clicked Sync. The sync took about 8 hours but appears to have completed successfully - I noted that one of the array drives reported nearly 100% errors in the reads during the sync, but I figured I'd let it finish. I stopped the array and then started it again, and now the 4TB I put in is showing as "Unmountable: Unsupported or no filesystem". Should I have formatted the drive in the array before syncing? I think I'd assumed that the sync would overwrite whatever was on the drive before. Should I correct it by formatting the drive (somehow) and then trying the sync again? And, the massive number of read errors - could that be anything to do with a cable being disturbed when I was fitting the replacement drive? Many thanks! beehouse-diagnostics-20240207-2246.zip
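(For what it's worth, here's a toy sketch of what I'd assumed a sector-for-sector rebuild does - restore the raw bytes, filesystem metadata included, so a pre-format shouldn't be needed. The filenames are made up for the demo; this is ordinary files standing in for disks, not a real rebuild:)

```shell
# Toy model of a sector-for-sector rebuild: copying the raw image restores
# every byte, so whatever filesystem was in the image comes back with it.
# (/tmp filenames are invented for this demo.)
dd if=/dev/urandom of=/tmp/original.img bs=4096 count=16 2>/dev/null
dd if=/tmp/original.img of=/tmp/rebuilt.img bs=4096 2>/dev/null
cmp -s /tmp/original.img /tmp/rebuilt.img && echo "byte-for-byte identical"
```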
  5. Looks like I've got a parity check scheduled in 15 hours' time - that's not going to screw things up, is it?
  6. The same disk has done it again - diagnostics attached, but no reboot this time. Hopefully it has more info? In this case it appeared to be triggered by me deleting a directory out of /mnt/disk1/appdata and then attempting to restart a docker container - not sure if it's to do with syncing catching up? beehouse-diagnostics-20231105-1051.zip
  7. 6.12.4 install. For some reason one of my drives is reporting an error and appearing as disabled - when I run a read-check it comes back with no errors and I'm not really sure what to do next. Diagnostics attached! beehouse-diagnostics-20231012-2253.zip
  8. Hi there. This question looks fairly similar to one asked v. recently but I'm not savvy enough to know if it's the same... The other day one of my drives reported errors and is flagged as "disabled": I've run SMART short and extended tests on it, which both returned without error, as well as the array Read Check - I'm not sure where to look to see more about these errors. In the syslog I can see a load of these:

     Jul 1 03:06:34 Beehouse kernel: blk_update_request: I/O error, dev sde, sector 1845904344 op 0x1:(WRITE) flags 0x800 phys_seg 94 prio class 0
     Jul 1 03:06:34 Beehouse kernel: md: disk3 write error, sector=1845904280
     Jul 1 03:06:34 Beehouse kernel: md: disk3 write error, sector=1845904288
     Jul 1 03:06:34 Beehouse kernel: md: disk3 write error, sector=1845904296
     Jul 1 03:06:34 Beehouse kernel: md: disk3 write error, sector=1845904304
     Jul 1 03:06:34 Beehouse kernel: md: disk3 write error, sector=1845904312
     Jul 1 03:06:34 Beehouse kernel: md: disk3 write error, sector=1845904320
     Jul 1 03:06:34 Beehouse kernel: md: disk3 write error, sector=1845904328
     Jul 1 03:06:34 Beehouse kernel: md: disk3 write error, sector=1845904336
     Jul 1 03:06:34 Beehouse kernel: md: disk3 write error, sector=1845904344
     Jul 1 03:06:34 Beehouse kernel: md: disk3 write error, sector=1845904352
     Jul 1 03:06:34 Beehouse kernel: md: disk3 write error, sector=1845904360
     Jul 1 03:06:34 Beehouse kernel: md: disk3 write error, sector=1845904368
     Jul 1 03:06:34 Beehouse kernel: md: disk3 write error, sector=1845904376
     Jul 1 03:06:34 Beehouse kernel: md: disk3 write error, sector=1845904384
     Jul 1 03:06:34 Beehouse kernel: md: disk3 write error, sector=1845904392

     Diagnostics attached. The drive in question is mainly just used as my TimeMachine backup. The box doesn't have anything non-standard in it (controller cards, etc) - it's just a Lenovo TS140 using the mainboard connectors, that's been ticking away nicely for years. Any ideas what I should try next? diagnostics-20220707-1631.zip
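(In case it's useful to anyone digging through a log like that: a quick awk over the syslog gives a rough picture of how many write errors there are and what sector range they span. The sample file here is just a couple of lines modeled on the excerpt above so the filter can be shown working; on the server you'd point it at the real syslog instead:)

```shell
# Two sample lines modeled on the syslog excerpt above.
cat > /tmp/syslog.sample <<'EOF'
Jul  1 03:06:34 Beehouse kernel: md: disk3 write error, sector=1845904280
Jul  1 03:06:34 Beehouse kernel: md: disk3 write error, sector=1845904392
EOF

# Count the matching errors and report the lowest/highest affected sector.
awk '/disk3 write error/ {
    split($NF, a, "=")
    if (min == "" || a[2] < min) min = a[2]
    if (a[2] > max) max = a[2]
    n++
} END { print n " errors, sectors " min "-" max }' /tmp/syslog.sample
# → 2 errors, sectors 1845904280-1845904392
```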
  9. That seems to have cheered it up - thanks a million! My new parity drive appears to be off happily building, and I can access my data again!
  10. Hi there, I've been really excited playing about with my new unraid setup these past few days since moving to it over Christmas, and I just took delivery of a couple of new drives after discovering how the unraid parity drive setup works. So, I just powered down my box, installed the new drives, then spun everything up again, and very worryingly for me the "main" 4TB drive that's got the bulk of my data on it has shown as unmountable. So I put the array into maintenance mode and ran a check on it - which returned all OK and told me life was fine. So then I started the array again, and it still showed as unmountable. I repeated the process, but now the check's returning:

     Phase 1 - find and verify superblock...
     superblock read failed, offset 0, size 524288, ag 0, rval -1
     fatal error -- Input/output error

     I noticed the drive assignment go from sdc before the installation to sdg afterwards - but from what I can tell in this forum that's a red herring. My thoughts were: 1) power down & check all the cables are seated properly in case I knocked something, 2) try unplugging the new drives & see if it sorts itself out. But I wondered if there's anything else I can do? Predictably, this would happen to my main data drive on the exact same day the new drive arrives that I was planning to use as a parity drive. Many thanks!