jagame

Members
  • Posts: 10

Everything posted by jagame

  1. I have the same issue. Why hasn't anyone from Unraid responded to this support request?
  2. SATA SSD is dead. It was old and served its purpose well. It will be missed, but we'll carry on and cherish the memory. Thank you both for the assistance. I would have fouled this up without you.
  3. Thank you, that took care of my array disk issue. I reseated the cables on the cache disk to no avail. I bought the cables a couple of years ago because I was having issues and I found what I thought were good cables and they seemed fine. But I guess not. Can anyone recommend good SATA cables with locking ends? Update - I forgot to mention that I did have to run it using -L. It errored indicating there was log data and wanted me to mount the drive to clear it but the drive is unmountable so -L seemed like my only option. I read a couple of other forum posts related to that as well.
  4. Here is the result of the test using the -nv options. I don't see where it states success or failure as the directions indicate:

     Phase 1 - find and verify superblock...
             - block cache size set to 1460704 entries
     Phase 2 - using internal log
             - zero log...
     zero_log: head block 96845 tail block 96716
     ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
             - scan filesystem freespace and inode maps...
     block (0,38063048-38063048) multiply claimed by cnt space tree, state - 2
     block (0,67188081-67188081) multiply claimed by cnt space tree, state - 2
     agf_freeblks 1889169, counted 1889167 in ag 0
     sb_fdblocks 474767990, counted 474767988
             - found root inode chunk
     Phase 3 - for each AG...
             - scan (but don't clear) agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
     data fork in ino 57523006 claims free block 7190378
     data fork in ino 304504010 claims free block 38063046
     data fork in ino 537545451 claims free block 67188079
             - agno = 1
     data fork in ino 2957327356 claims free block 369667381
     data fork in ino 3185871218 claims free block 398209908
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - agno = 6
             - agno = 7
             - agno = 8
     data fork in ino 17682402172 claims free block 2210304304
             - agno = 9
     data fork in ino 19330604584 claims free block 2416325587
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
     free space (0,1107568-1107570) only seen by one free space btree
     free space (0,7190380-7190381) only seen by one free space btree
     free space (0,38063051-38063054) only seen by one free space btree
     free space (0,67188022-67188024) only seen by one free space btree
     free space (1,101231928-101231930) only seen by one free space btree
     free space (1,129769772-129769773) only seen by one free space btree
             - check for inodes claiming duplicate blocks...
             - agno = 0
             - agno = 1
             - agno = 6
             - agno = 7
             - agno = 2
             - agno = 5
             - agno = 8
             - agno = 9
             - agno = 3
             - agno = 4
     No modify flag set, skipping phase 5
     Phase 6 - check inode connectivity...
             - traversing filesystem ...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - agno = 6
             - agno = 7
             - agno = 8
             - agno = 9
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify link counts...
     No modify flag set, skipping filesystem flush and exiting.

     XFS_REPAIR Summary    Thu Dec  8 15:16:57 2022

     Phase        Start           End             Duration
     Phase 1:     12/08 15:15:54  12/08 15:15:55  1 second
     Phase 2:     12/08 15:15:55  12/08 15:16:00  5 seconds
     Phase 3:     12/08 15:16:00  12/08 15:16:41  41 seconds
     Phase 4:     12/08 15:16:41  12/08 15:16:42  1 second
     Phase 5:     Skipped
     Phase 6:     12/08 15:16:42  12/08 15:16:57  15 seconds
     Phase 7:     12/08 15:16:57  12/08 15:16:57

     Total run time: 1 minute, 3 seconds

     That will probably be more readable in an image
  5. Thank you. I'll do that now and report back.
  6. A few days ago I got a new drive for my Unraid server. After installing the new drive and powering on, I got an error on my parity drive. After a couple of days rebuilding parity on the new drive, I went ahead and formatted the other drive and added it back to the array. All looked good and things appeared to be back in order last night. I woke up this morning to find that Disk2 is now reporting "Unmountable: wrong or no file system", and my SATA SSD cache drive is missing (I have two cache drives, one NVMe and the other a SATA SSD, giving me a total of 6 disks to fill my license). I do see the cache drive listed under Historical Devices. What's more odd is that my new parity drive is listed both under Historical Devices and in my array as the Parity drive. I'm really not sure what's going on now. I've pulled diagnostics and am attaching them. If someone has a little free time, I would greatly appreciate some assistance with this. I screwed up the last time I tried to troubleshoot this on my own and lost all my data. unraid-diagnostics-20221208-1426.zip
  7. Thanks for the feedback guys/gals/others (pick your identifier of choice). I'm an Emby troll for the moment and really don't care to run Plex for just podcasts. I just installed the AirSonic app/container and it seems to be working. Thanks!
  8. I'm having issues getting podcasts working. The audiobooks work great but I was hoping to also use it to download and stream some podcasts. Is there something special that has to be done to get that working? Examples of podcasts I have added but aren't working: http://feeds.feedburner.com/TheHistoryOfRome https://revolutionspodcast.libsyn.com/rss http://podcasts.joerogan.net/feed
  9. I have the same issue, but I never formatted the drive. I had an old 8TB disk that took 51 hours to finish all 10 steps of the preclear. I then added the disk to my array and it started clearing again. It's been over 2 hours and I'm only at 20%. Is there a detailed explanation of the preclear process, and of what is evaluated when a disk is added? This seems pretty ridiculous. I thought preclear was writing all zeros to the disk and wiping all partitions, but if so, why is it clearing the disk again? Is there a log file with the details of what Unraid found and the actions it is taking?
  10. I just saw this in my log as well and would like to know the resolution.
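
The xfs_repair exchange in posts 3-4 above can be sketched as a command sequence. This is a minimal sketch, assuming the Unraid array has been started in Maintenance mode and that /dev/md2 corresponds to the unmountable disk slot; the device name is an assumption, so substitute your own (and run it against the mdX device rather than the raw sdX device so parity stays in sync):

```shell
# Dry run first: -n reports problems without modifying anything,
# -v is verbose. /dev/md2 is assumed; use your affected array slot.
xfs_repair -nv /dev/md2

# Actual repair. If xfs_repair refuses to run because the log is dirty
# and the disk cannot be mounted to replay it, -L zeroes the log as a
# last resort (the most recent metadata changes may be lost).
xfs_repair -v /dev/md2
xfs_repair -vL /dev/md2
```

After the repair finishes, stop the array, restart it in normal mode, and check whether the disk mounts; any orphaned files end up in lost+found on that disk.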