axeman

Members
  • Posts

    538
Everything posted by axeman

  1. Attached - this was before the extended SMART, captured as soon as I got the error. I'll run an extended test now. Sorry - yes, correcting. Disk 12 Errors SDO -diagnostics-20220428-2238.zip
  2. So I had one disk show read errors during the monthly parity check. I'm guessing read errors are a bit different from write errors, in that Unraid doesn't disable the drive? What are my next steps here? Just move the files off this disk and replace it? It's the oldest one, going on 10 years, I think, so I have no problem getting rid of it. Diags below. Unraid 6.9.2.

     I have a bunch of these in syslog:

       Apr 28 22:32:51 Tower kernel: md: disk12 read error, sector=2949051960
       Apr 28 22:32:51 Tower kernel: md: disk12 read error, sector=2949051968
       Apr 28 22:32:51 Tower kernel: md: disk12 read error, sector=2949051976
       Apr 28 22:32:51 Tower kernel: md: disk12 read error, sector=2949051984
       Apr 28 22:32:51 Tower kernel: md: disk12 read error, sector=2949051992
       Apr 28 22:32:51 Tower kernel: md: disk12 read error, sector=2949052000
       Apr 28 22:32:51 Tower kernel: md: disk12 read error, sector=2949052008
       Apr 28 22:32:51 Tower kernel: md: disk12 read error, sector=2949052016
       Apr 28 22:32:51 Tower kernel: md: disk12 read error, sector=2949052024
       Apr 28 22:32:51 Tower kernel: md: disk12 read error, sector=2949052032
       Apr 28 22:32:51 Tower kernel: md: disk12 read error, sector=2949052040

     About Disk 12:

       Apr 19 07:39:52 Rigel kernel: xfs filesystem being mounted at /mnt/disk12 supports timestamps until 2038 (0x7fffffff)
       Apr 19 07:39:52 Rigel emhttpd: shcmd (6511): xfs_growfs /mnt/disk12
       Apr 19 07:39:52 Rigel root: meta-data=/dev/mapper/md12   isize=512    agcount=4, agsize=122094532 blks
       Apr 19 07:39:52 Rigel root:          =                   sectsz=512   attr=2, projid32bit=1
       Apr 19 07:39:52 Rigel root:          =                   crc=1        finobt=1, sparse=0, rmapbt=0
       Apr 19 07:39:52 Rigel root:          =                   reflink=0
       Apr 19 07:39:52 Rigel root: data     =                   bsize=4096   blocks=488378126, imaxpct=5
       Apr 19 07:39:52 Rigel root:          =                   sunit=0      swidth=0 blks
       Apr 19 07:39:52 Rigel root: naming   =version 2          bsize=4096   ascii-ci=0, ftype=1
       Apr 19 07:39:52 Rigel root: log      =internal log       bsize=4096   blocks=238465, version=2
       Apr 19 07:39:52 Rigel root:          =                   sectsz=512   sunit=0 blks, lazy-count=1
       Apr 19 07:39:52 Rigel root: realtime =none               extsz=4096   blocks=0, rtextents=0

     WDC_WD20EARS-00MVWB0_WD-WCAZA5944835-20220428-2238 disk12 (sdo).txt
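     Before replacing the disk, it's worth confirming what SMART actually says. A minimal sketch, assuming the drive is still /dev/sdo as in the diagnostics above:

     ```shell
     # Current SMART health and attributes (watch Reallocated_Sector_Ct
     # and Current_Pending_Sector):
     smartctl -a /dev/sdo

     # Kick off the extended self-test mentioned above (runs in the
     # background on the drive itself):
     smartctl -t long /dev/sdo

     # Check the self-test log once it finishes:
     smartctl -l selftest /dev/sdo
     ```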
  3. Is it safe to delete the contents of rclone_cache? I ended up with two of them (rclone_cache and rclone_cache_old) due to the way I initially set up my mounts. I'd like to delete the old one, but wasn't sure if it would have some detrimental effect on my backend files. Thanks!
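     For what it's worth, a cautious way to clear the stale cache directory. The path here is an assumption based on the post; the key point is to make sure no rclone mount still has it open, and that it holds no files rclone hasn't finished uploading, before deleting - the VFS cache only holds local copies of remote data, so removing it doesn't touch the backend:

     ```shell
     # Hypothetical path for the old cache -- adjust to your setup.
     OLD_CACHE="/mnt/user/rclone_cache_old"

     # Confirm no running rclone process still references the old directory:
     ps aux | grep '[r]clone' | grep "$OLD_CACHE" && echo "still in use!"

     # Eyeball any files left in it (pending uploads would show up here):
     find "$OLD_CACHE" -type f | head

     # If both checks come back clean, it's safe to remove:
     rm -rf "$OLD_CACHE"
     ```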
  4. I'm a newb - but have you checked to see if the mountcheck files got created properly?
  5. I'm no expert, so take this with a grain of salt, but it sounds like it might be extracting directly to the rclone mount.
  6. I'm finally joining the cool kids club! Is there a reason to have the upload script do this, as opposed to just using a path like /tower/local/downloads?
  7. I saw this - not sure if it's something we need to worry about... https://nakedsecurity.sophos.com/2022/01/14/serious-security-linux-full-disk-encryption-bug-fixed-patch-now/ - cryptsetup on my server shows 2.3.4; the patched version is 2.4.3.
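     A quick way to compare the installed version against the fixed release (per the advisory, the fix landed in cryptsetup 2.4.3):

     ```shell
     # Print the installed version, e.g. "cryptsetup 2.3.4 ...":
     cryptsetup --version

     # Compare against the patched release; sort -V does a proper
     # version-number compare and prints the older version first:
     printf '%s\n' "$(cryptsetup --version | awk '{print $2}')" 2.4.3 | sort -V | head -1
     ```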
  8. I still haven't gotten it, and I'm wary too. Still waiting. I don't remember what I set on mine for the OAuth...
  9. I'm in the same boat - but haven't received any email yet - wondering if waiting it out is best or if I should suck it up and do it.
  10. I wonder if the band-aid answer is to have a script that runs before the upload and moves the metadata to the array. Probably not the best solution, though. It would be great if mergerfs could orchestrate which files go to which folder.
  11. I might have asked this before (can't seem to find it)... but are the exclusions in the upload script? Do you have something else to move them so that they end up somewhere on the array, or do they basically stay in /local/? I have mergerfs joining my files from my array (/mnt/user/Videos), cache drive (/mnt/user/local), and of course the rclone cloud mount. Wondering if there's a way to force mergerfs to write certain files directly to /mnt/user/Videos only.
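     I don't believe mergerfs can route individual files, but it can route by branch and by existing path. A sketch using the branch list from the post - verify the policy names against the mergerfs docs before relying on this:

     ```shell
     # category.create=epff = "existing path, first found": new files are
     # only created on a branch that already contains the parent directory,
     # so a folder that exists only under /mnt/user/Videos keeps receiving
     # its new files there.
     # The =NC (no-create) suffix stops mergerfs from ever writing new
     # files straight to the cloud mount; the upload script handles that.
     mergerfs \
       -o category.create=epff,cache.files=partial,dropcacheonclose=true \
       '/mnt/user/local:/mnt/user/Videos:/mnt/user/rclone=NC' \
       /mnt/user/mergerfs
     ```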
  12. With ESXi 7, you can directly boot off the USB.
  13. I used MC, went to each drive, and created a /local/ folder - then moved any movies from whatever path they were at into /local/, so that you're moving within the same disk.
  14. Why do this on Unraid? My Windows machine maps to the Unraid shares for Emby, Sonarr, etc.
  15. I do have it mapped on my Windows machine as a drive letter. Not sure if that's the same - but again, no issues.
  16. My Emby server is on a Windows machine, and it accesses the mergerfs share like any other Unraid share. Zero difference. \\tower\mergerfs\Videos etc...
  17. Just tower/mergerfs. The only(?) downside is that Emby also creates the metadata there (I have it configured to save metadata in folders), so all those small files count toward the 400K team drive file limit. If it gets to be too much, I can always create a local metadata folder on the Emby server and let it store metadata there. But right now, it's not a huge problem.
  18. Okay - so I have the script set up somewhat as intended:

      Tower/local - where the stuff that will get uploaded goes.
      Tower/videos - all my other "non-cloud" videos (kids' movies, etc.) that need to be available even if the cloud is down due to an ISP issue.
      Tower/rclone - where all my gdrive mounts are directly mounted. I don't touch this, except maybe to see what's local vs. cloud.
      Tower/mergerfs - combines Tower/local, Tower/videos, and Tower/rclone.

      So the Emby server library has paths presented as Tower/mergerfs/Videos/TV or Tower/mergerfs/videos/kids.
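      The layout above boils down to a single union where branch order matters. A minimal sketch, assuming the shares live under /mnt/user (the Tower/... names being the SMB views of the same paths):

      ```shell
      # Branch order decides precedence when the same path exists in more
      # than one branch: the local (upload staging) share wins over the
      # array share, which wins over the cloud mount.
      mergerfs \
        /mnt/user/local:/mnt/user/videos:/mnt/user/rclone \
        /mnt/user/mergerfs

      # Emby then sees one tree, e.g.:
      #   /mnt/user/mergerfs/Videos/TV   (exported as \\tower\mergerfs\Videos\TV)
      ```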
  19. That's just how I have it - because of circumstance, really. Unraid and Emby (Sonarr too) were on different VMs for ages. I just added the scripts to Unraid and updated the existing instances to point to the mounts on Unraid; I didn't have to do anything else. I also have non-cloud shares that I still need Unraid for - so to me, having everything storage-related (local and cloud) on the Unraid server, and presentation and gathering on a separate machine, is a good separation of concerns.
  20. I may be missing the mark here - but let Unraid run the scripts and share the data. Wherever Plex is, just point it at the Unraid shares. That's exactly how my current setup is; none of my stuff runs in the Unraid dockers. The only downside is that if the mount goes down, your library might get wonky. Typically Sonarr will complain about it - Emby doesn't do anything other than stall.
  21. I am certainly no expert in this - but I believe you can accomplish this by running another instance of the script that points at your 4K collection, with the option set to NOT create a mergerfs mount for that instance: MergerfsMountShare="ignore" in the variables at the top. Then in the other script (the one that does create the mergerfs mount), you update LocalFilesShare2 (or whichever) to include the path you created above. I have something similar with my TV shows: shows that are in progress and shows that are completed are separated out. The completed ones are on the cloud mount; the in-progress ones are local. My scenario is different because those libraries are meant to show up separately, but I'd imagine it would work for your purpose as well.
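      As a config sketch of what that looks like in practice - the variable names follow the post, while the 4K remote and paths are hypothetical; check them against the header of your actual copy of the script before using:

      ```shell
      # --- second instance: mounts the 4K remote, creates no mergerfs mount ---
      RcloneRemoteName="gdrive_4k"           # hypothetical second remote
      LocalFilesShare="/mnt/user/local_4k"   # hypothetical local path for 4K files
      MergerfsMountShare="ignore"            # per the post: skip the mergerfs mount

      # --- main instance: fold the 4K path into its existing mergerfs mount ---
      LocalFilesShare2="/mnt/user/local_4k"
      ```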
  22. Bjur, if you already know what your passwords are, try creating a new remote in rclone config and see if you can see your unencrypted data.
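      A sketch of that test, assuming a crypt remote layered on a base remote named gdrive. The remote name and path are hypothetical, and the passwords must be obscured before being passed on the command line:

      ```shell
      # Obscure the known password and salt (rclone stores them obscured):
      PASS=$(rclone obscure 'your-password')
      SALT=$(rclone obscure 'your-salt')

      # Create a throwaway crypt remote pointing at the same encrypted folder:
      rclone config create gdrive_test crypt \
          remote="gdrive:crypt" password="$PASS" password2="$SALT"

      # If the passwords are right, this lists readable (decrypted) names:
      rclone lsd gdrive_test:
      ```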
  23. Same here. I even tried adjusting the sleep time to 120 seconds. Didn't seem to help.