TommyJohn

Members
  • Posts: 17

  1. Hi Rolox, try changing the names of the files; Tdarr should then process them again.
  2. Hi guys. I've been using unRAID/HandBrake for a couple of years now and have successfully encoded TBs of video, but all of a sudden I'm getting the "Cannot read or write the directory" error. I haven't changed permissions anywhere to my knowledge. The output folder is set to /mnt/user/Media/_Handbrake Transcodes/ and this is the folder I'm choosing to save to. I have tried different folders with the same error. The permissions look right to me:

         ls -la "/mnt/user/Media/_Handbrake Transcodes"
         total 23851556
         drwxrwx--- 1 nobody users 41 Mar 30 2021 ./
         drwxrwxrwx 1 nobody users 23 Feb 2 2021 ../

     HandBrake Docker settings and diagnostics are attached. There's no encode log because encoding won't even start.

     Edit: I can encode from within Windows HandBrake to this directory no problem; it's only in unRAID that I'm having this issue. I've removed the Docker image and installed it fresh... nothing is working for me. Hope someone can point out what will probably be an obvious mistake, thanks! diagnostics-20220226-1150.zip
  3. vmunich, did you ever get around to writing something for this? I'd be interested in doing the same. Edit: Never mind, I just found this on Reddit: https://technicalramblings.com/blog/monitoring-your-ups-stats-and-cost-with-influxdb-and-grafana-on-unraid-2019-edition/
  4. Hi guys. I'm restoring my data to an array and want to start Deluge without a connection, so that no torrents start downloading when I start the Docker container. What would be the best way to do this? Would I just change a port setting somewhere to block traffic? The reason is that last time this happened, when I installed the VPN Docker container and restored all my data from backup, Deluge "forgot" the states of all the torrents and started downloading like mad. I want to be able to go back into each torrent and make sure the skip options are set correctly. TIA.
  5. Benson: the H200 was installed after the H700 was removed; they weren't simultaneously connected. johnnie: This is a good point. I honestly can't remember if I formatted the disks on the H700 before moving them to the H200, though I'm almost certain it was after, because I wanted to ensure the correct serial numbers for the new drives were showing in unRAID. So what do I do now? Do I attempt xfs_repair -L? Will this potentially repair the filesystem?

     UPDATE: I powered down for 24 hours, and when powering back up this morning the array was back online... but all drives were empty. It looks like I've lost the data and have to recover from backup. I'm still worried that this might happen again; are there any indicators I should look for in the logs? Any way to get a warning about the UUID issue in the future? Here is my new xfs_repair message:

         root@Tower:~# xfs_repair -v /dev/sdb1
         Phase 1 - find and verify superblock...
                 - block cache size set to 1513776 entries
         Phase 2 - using internal log
                 - zero log...
         zero_log: head block 662654 tail block 662650
         ERROR: The filesystem has valuable metadata changes in a log which
         needs to be replayed. Mount the filesystem to replay the log, and
         unmount it before re-running xfs_repair. If you are unable to mount
         the filesystem, then use the -L option to destroy the log and
         attempt a repair. Note that destroying the log may cause
         corruption -- please attempt a mount of the filesystem before
         doing this.

     Should I proceed with -L in an attempt to recover the data?
  6. So I finally took the plunge and replaced my PERC H700 card with an H200 flashed to IT mode. I had an existing XFS array on the PERC with 12 TB of data. Fingers crossed... but when I rebooted, the array wouldn't start and the drives all gave the error "Unmountable: No file system". I tried xfs_repair -v on the drives, but no luck (errors were corrected, but the drives still won't mount).

     I assumed this was to be expected after changing cards, so I decided to start a new array with some new drives I had; at the same time, I figured that by copying all my data to a new array I would at least end up with a non-fragmented array, as my old one was probably 90% fragmented. Anyway, four days later I finally finished copying everything and setting up all my Docker containers. Then I decided to add in two drives for parity. I precleared the new disks without errors and rebooted, only to get the exact same error as before: "Unmountable: No file system"! Arghh.

     Side note: is there any advantage to setting up parity while building the array, as opposed to after? Would this have slowed down the copy process?

     So, again, I ran xfs_repair -v on the array drives, fixed some errors, ran it again, and got no more errors. I rebooted, and hit the same problem: "Unmountable: No file system". What are my recommended next steps? Do I proceed with xfs_repair -L even though I'm not getting any xfs_repair errors? I don't want to have to start from scratch again and copy 12 TB of data, which is hard because it's all sitting on shares spread across drives from my former array. But as a last resort I can do that. I'm hoping to salvage the existing data. My diagnostics are attached; thanks in advance to those offering guidance.

     EDIT: After running xfs_repair on all disks and powering down overnight, the array came back online on the next boot, but with the data gone. tower-diagnostics-20190808-1546.zip
  7. Just saw that Deluge has a major update to 2.0; can we expect to see a binhex version soon 😉?
  8. Got it, my bad: I had a typo in the repository field, entering "unmaniac" with an extra "i". Thanks for the help!
  9. Thanks Blindside... I've never made a container before. What do I put in for the repository URL?
  10. I don't see unmanic in Community Applications; is it called something else?
  11. My CPU will not go above 30% during encodes. I have 24 cores and no CPU pinning set for Docker containers, and I am not using an ultrafast preset. I am trying both x265 and x264 encodes: x264 doesn't go above 20%, and x265 doesn't go above 30%. Any ideas?
  12. Anybody have any torrent creation suggestions for unraid?
  13. Worst case, I have all my data backed up on other drives, so I can start fresh if necessary; I'd just like to avoid it. I'm looking at the H200... how would it support 12 drives like the H700? It seems to only support 8 drives, so does the SAS expander on the R510 backplane take care of the rest?
  14. Hi Jonathan, yes, I did read about RAID controllers prior to ordering my server, and from what I understood, the issues with certain RAID controllers (not specific to the H700) were not passing along SMART info and not reporting device sizes correctly. My controller is currently handling both of those without issue. However, I did read that if the H700 fails, my only recourse is to get another H700 in its place. If I were to swap out the H700 for, say, an H200, would I have to reformat and re-transfer all my data, or would the new card handle everything like nothing had changed?
  15. Thanks trurl! I'll use this method and report back if I have any issues. Thanks for the tip regarding the cache drive; I thought it was only used for that purpose.
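The xfs_repair error quoted in post 5 spells out its own recovery order. As a sketch of that sequence (the device name is taken from the post's output; substitute your own, and note that on unRAID the usual advice is to run repairs against the /dev/mdX devices with the array in maintenance mode so parity stays in sync):

```shell
# 1. Mount the filesystem so the metadata log is replayed, then unmount.
mkdir -p /mnt/test
mount /dev/sdb1 /mnt/test
umount /mnt/test

# 2. Re-run the repair; the "valuable metadata changes" error should be gone.
xfs_repair -v /dev/sdb1

# 3. Only if the mount in step 1 fails: destroy the log as a last resort.
#    This discards the unreplayed metadata changes and can cause corruption.
# xfs_repair -L /dev/sdb1
```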
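On the low CPU utilization in post 11: one quick thing to rule out is a per-container CPU limit, which would cap the encoder regardless of preset. A hedged sketch ("handbrake" is an assumed container name, not from the post):

```shell
# Show any CPU quota or core pinning applied to the container.
# NanoCpus=0 and an empty CpusetCpus mean Docker imposes no limit,
# pointing instead at the encoder's own thread scaling.
docker inspect \
  --format 'NanoCpus={{.HostConfig.NanoCpus}} CpusetCpus={{.HostConfig.CpusetCpus}}' \
  handbrake
```

If no limit is set, the cap may simply be the encoders themselves: x264 and x265 do not always scale to 24 cores, since their frame-level parallelism is bounded by the video's resolution and the thread-pool settings.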
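For post 12's question about creating torrents on unRAID, one common command-line option is mktorrent, which can be run from a Docker container or installed as a plugin package. A sketch of its basic usage; the tracker URL and both paths below are placeholders, not real values:

```shell
# Create a private .torrent file for a folder.
mktorrent -a "udp://tracker.example.com:6969/announce" \
          -p \
          -o "/mnt/user/example.torrent" \
          "/mnt/user/Media/example_folder"
# -a announce URL, -p private flag, -o output file
```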
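The rename trick suggested in post 1 can be sketched as a small shell function. This is a minimal, hypothetical example: the directory path and the .mkv extension are assumptions, not details from the post. Appending a suffix gives each file a new name, so Tdarr's library scan treats it as a new item and queues it again.

```shell
# rename_for_rescan: append ".rescan" before the .mkv extension of every
# file in the given directory so Tdarr sees them as new files and
# processes them again. Directory layout and extension are assumptions.
rename_for_rescan() {
    dir=$1
    for f in "$dir"/*.mkv; do
        [ -e "$f" ] || continue            # glob matched nothing; skip
        mv -- "$f" "${f%.mkv}.rescan.mkv"  # new name -> new library entry
    done
}

# Example (hypothetical path):
# rename_for_rescan "/mnt/user/Media/incoming"
```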
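On the HandBrake error in post 2: the listing shows drwxrwx--- nobody:users, which grants "other" users no access at all, so a container running as a UID/GID outside nobody:users would be denied even though the host user can write there fine. A generic sketch for checking this from the container's point of view (run it inside the container, e.g. via `docker exec`, so the check happens as the container's user; the quoted path is the one from the post):

```shell
# check_writable: report whether a directory exists and is writable by
# the *current* user. Run inside the container so the result reflects
# the container's UID/GID rather than the host user's.
check_writable() {
    if [ -d "$1" ] && [ -w "$1" ]; then
        echo "writable: $1"
    else
        echo "NOT writable: $1"
    fi
}

# Example, using the path from the post:
# check_writable "/mnt/user/Media/_Handbrake Transcodes"
```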