linenoise

Posts
  1. I found a container in the App Store that looks very straightforward, but there is no link to a support forum, so I figured I would piggyback on this one. Looking at the Docker installation settings in Unraid, I tried to move the backup directory to its own share, /mnt/user/backup local/, but when I perform a backup it still saves the file in /mount in the root directory. I am not sure where the backups are physically stored; my guess is the cache drive, but I wasn't able to find them with Krusader when navigating to the drive directly. The Nextcloud AIO GitHub page says you can back up to multiple locations, but I have no idea how to do this. My plan was to save one copy on the array and then use rclone to back it up to an external USB drive and an offsite location (see the sketch below). Second question: I am using NGINX Proxy Manager for Let's Encrypt certs and forwarding HTTP traffic to port 11000, which seems to be working fine, but according to Nextcloud AIO you need to forward port 80 to Nextcloud to renew certificates. Since Unraid is using port 80 for its UI, will there be any impact on the functioning of Nextcloud AIO? Third question: if I am using NGINX Proxy Manager, how do I forward ports 3478/TCP and 3478/UDP for the Talk container? I only see HTTP (80) and HTTPS (443) as options in NGINX Proxy Manager. Do I have to forward those ports directly from my router? I would prefer to use a reverse proxy if this is possible. Thanks
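     A minimal sketch of the second-copy idea above, assuming the AIO backups end up in /mnt/user/backup, the USB drive is mounted by Unassigned Devices at /mnt/disks/usb_backup, and an rclone remote named "offsite" has already been configured (all three names are placeholders, not the container's defaults):

        # mirror the local backup share to the external USB drive
        rclone sync /mnt/user/backup /mnt/disks/usb_backup/nextcloud-aio --progress

        # mirror the same share to the previously configured offsite rclone remote
        rclone sync /mnt/user/backup offsite:nextcloud-aio --progress

     Running both commands from a scheduled script after the nightly AIO backup would give the array copy, the USB copy, and the offsite copy described above.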
  2. I am not sure if it's because this has not been updated in a while, but it no longer shows up in the App Store. I was able to pull the image and have Community Applications build the template for me, and it seems to work fine. Could you put a more official copy of this Docker template back? The icon at the top of this post, which I planned to link to for the container I created, is also broken.
  3. Has anyone got this to work? I am getting the following error. I followed the GitHub instructions to get the cookies and create the cookies.txt file. What I am not clear on is how to enter the YTDL_OPTIONS in the Unraid form. The instructions on the GitHub page show a Docker Compose file, but I am unsure how to enter this in the Unraid format. A screenshot of a working config like the one above would be great. It would also be helpful to see an example with multiple YTDL_OPTIONS, such as subs, audio, etc. I have seen some example strings in this thread but am not sure how to enter them in the Key/Value format Unraid uses (see the sketch below). Thanks
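     A hedged sketch of how this is usually entered, assuming the container follows the MeTube convention of taking YTDL_OPTIONS as a single JSON string; the paths, port, and option choices here are only examples, so check the option names against the yt-dlp documentation and the container's own README:

        # In the Unraid template, add a Variable with:
        #   Key:   YTDL_OPTIONS
        #   Value: {"cookiefile":"/downloads/cookies.txt","writesubtitles":true,"format":"bestvideo+bestaudio/best"}
        #
        # The equivalent docker run command would look like:
        docker run -d --name metube \
          -p 8081:8081 \
          -v /mnt/user/downloads:/downloads \
          -e YTDL_OPTIONS='{"cookiefile":"/downloads/cookies.txt","writesubtitles":true,"format":"bestvideo+bestaudio/best"}' \
          alexta69/metube

     The whole JSON object goes into the single Value field; Unraid passes it through as one environment variable, the same way Compose would.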
  4. OK, I went ahead and deleted the log. This worked. I am assuming the impact of deleting the metadata log was that the files it was confused about were placed in a directory called lost+found, but a preliminary scan shows that all the files are there. Thanks for the quick and helpful response.
  5. OK, I ran the disk check using the check option. I got this result. I tried to mount the file system and got this error. Is there any other way to replay the log, or do I have to wipe it and move on? If I do, how much damage would that do; would it end up deleting files? (See the sketch below.)
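     For context, a hedged sketch of the usual order of operations here; the device name is a placeholder, and the real path depends on how the iSCSI disk shows up:

        # read-only check, makes no changes:
        xfs_repair -n /dev/sdX1
        # the normal way to replay the XFS log is simply to mount the filesystem;
        # only if mounting keeps failing is the log zeroed as a last resort:
        xfs_repair -L /dev/sdX1

     Zeroing the log discards whatever updates were still sitting in it, which is why anything xfs_repair can no longer place ends up in lost+found rather than being deleted outright.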
  6. I hope this is the right place to post this. It looks like I have a corrupted XFS file system that I need to repair. I have been running this setup without any issues for over a year. The catch is that the file system is located on an iSCSI-attached share across a 10 Gig optical link, mounted using Unassigned Devices. Here is a high-level diagram of the setup, and the unassigned device as it shows up in the Main menu (hope you can see it). I can't mount the drive under Unassigned Devices. This is the error I get when I try to mount the iSCSI drive with Unassigned Devices; it then reverts back to unmounted. The drive shows up as /dev/sdw, so I THINK the iSCSI interface is working correctly, but this could be cached from when it was working; I am not too experienced with iSCSI. I didn't see a way to do an xfs_repair from the GUI, so I tried it from the command line. This is what I get: xfs_repair tries to verify the superblock and prints "............" continuously until I Ctrl+C to break out (see the sketch below). Before you ask why I am doing this: 1. I like the Nextcloud docker, and the Docker system in Unraid in general, better than FreeNAS jails and plugins. 2. The idea was to use the docker in Unraid and then take advantage of the ZFS file system in FreeNAS for better error protection and speed. Not sure how the speed aspect turned out, since the FreeNAS array is ZFS but when it was connected to Unraid the share was formatted XFS; I think that's how it works. To be honest, I hacked this together using several different tutorials and trial and error. Anyway, I didn't do any speed tests, but it didn't feel like an improvement over Unraid. Yes, I am using a 10 Gb interface between FreeNAS <-> Unraid <-> client computer. I am not sure if it is even using ZFS, since it is mounted as XFS. 3. Once I get access to the data, I plan to just run Nextcloud on FreeNAS, since this seems to be more headache than it's worth.
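     A common cause of that endless run of dots is pointing xfs_repair at the whole disk when the filesystem actually lives on a partition, so a hedged first step (sdw is taken from the post above; the partition number is an assumption):

        lsblk /dev/sdw              # confirm whether the XFS filesystem is on sdw itself or on sdw1
        xfs_repair -n /dev/sdw1     # read-only check against the partition, not the raw disk

     If the read-only check finds the superblock immediately, the original command was simply aimed at the wrong device node.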
  7. I tried the command hdparm -w 1 /dev/sdm but it didn't work; there is a slight typo, you have to remove the space between the W and the 1. The command should be: hdparm -W1 /dev/sdm (see the sketch below). I found a great resource for hdparm commands here, with detailed descriptions and when it's best to turn the disk cache on and off: https://www.linux-magazine.com/Online/Features/Tune-Your-Hard-Disk-with-hdparm
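     For anyone following along, the same flag with no value reads the current setting back, which is handy for confirming the change took; /dev/sdm here just stands in for whichever disk you are tuning:

        hdparm -W /dev/sdm      # query: prints "write-caching =  1 (on)" or "0 (off)"
        hdparm -W1 /dev/sdm     # enable the drive's write cache
        hdparm -W0 /dev/sdm     # disable it again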
  8. Yes, drive 14 was offline when I formatted it while trying to fix the XFS corruption. Thanks @JorgeB, this was extremely helpful; you might want to pin it for future reference. I didn't see the dm-n mapping mentioned in the other xfs_repair thread on here or in @SpaceInvaderOne's excellent YouTube video on XFS repair. For completeness, I ran xfs_repair from the GUI and below are the results; this appears to have fixed the problem.
Phase 1 - find and verify superblock...
        - block cache size set to 6175816 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 194465 tail block 194465
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
bad CRC for inode 540099641
bad CRC for inode 540099641, will rewrite
Bad atime nsec 2223570943 on inode 540099641, resetting to zero
cleared inode 540099641
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 1
        - agno = 3
clearing reflink flag on inode 671443 clearing reflink flag on inode 1611822151 clearing reflink flag on inode 671446 clearing reflink flag on inode 671454 clearing reflink flag on inode 537782798 clearing reflink flag on inode 537782803 clearing reflink flag on inode 1075407107 clearing reflink flag on inode 1611822153 clearing reflink flag on inode 1611822597 clearing reflink flag on inode 1075407108 clearing reflink flag on inode 1075407111 clearing reflink flag on inode 537782807 clearing reflink flag on inode 1611822609 clearing reflink flag on inode 537782808 clearing reflink flag on inode 1611822610 clearing reflink flag on inode 672347 clearing reflink flag on inode 1075407115 clearing reflink flag on inode 537782809 clearing reflink flag on inode 1611822611 clearing reflink flag on inode 672365 clearing reflink flag on inode 1611822643 clearing reflink flag on inode 1611953539 clearing reflink flag on inode 1611953540 clearing reflink flag on inode 1611953543 clearing reflink flag on inode 1611953544 clearing reflink flag on inode 1079722387 clearing reflink flag on inode 1079722388 clearing reflink flag on inode 1079723377 clearing reflink flag on inode 1079723380 clearing reflink flag on inode 598095808 clearing reflink flag on inode 1080759596 clearing reflink flag on inode 1080759597 clearing reflink flag on inode 598095809 clearing reflink flag on inode 598095810 clearing reflink flag on inode 598095811 clearing reflink flag on inode 598114240 clearing reflink flag on inode 598114242 clearing reflink flag on inode 598114243 clearing reflink flag on inode 598114244 clearing reflink flag on inode 598114246 clearing reflink flag on inode 598114247 clearing reflink flag on inode 598114263 clearing reflink flag on inode 598114264 clearing reflink flag on inode 598114265 clearing reflink flag on inode 598114266 clearing reflink flag on inode 598114267 clearing reflink flag on inode 598114268 clearing reflink flag on inode 598114269 clearing reflink flag on inode 598114270 clearing reflink flag on inode 598114271 clearing reflink flag on inode 598114272 clearing reflink flag on inode 598114273 clearing reflink flag on inode 598114274 clearing reflink flag on inode 598114275 clearing reflink flag on inode 598114276 clearing reflink flag on inode 598114277 clearing reflink flag on inode 598114278 clearing reflink flag on inode 
598114279 clearing reflink flag on inode 598114280 clearing reflink flag on inode 598114281 clearing reflink flag on inode 598114282 clearing reflink flag on inode 598114284 clearing reflink flag on inode 598114285 clearing reflink flag on inode 598114286 clearing reflink flag on inode 598114287 clearing reflink flag on inode 598114288 clearing reflink flag on inode 598114289 clearing reflink flag on inode 598114290 clearing reflink flag on inode 1088114259 clearing reflink flag on inode 598114292 clearing reflink flag on inode 598114293 clearing reflink flag on inode 1088114271 clearing reflink flag on inode 598114294 clearing reflink flag on inode 598114295 clearing reflink flag on inode 598114296 clearing reflink flag on inode 598114298 clearing reflink flag on inode 598114299 clearing reflink flag on inode 598114300 clearing reflink flag on inode 598114301 clearing reflink flag on inode 598115714 clearing reflink flag on inode 598115715 clearing reflink flag on inode 598115716 clearing reflink flag on inode 598115717 clearing reflink flag on inode 598115718 clearing reflink flag on inode 598115719 clearing reflink flag on inode 598115721 clearing reflink flag on inode 598115723 clearing reflink flag on inode 598115728 clearing reflink flag on inode 598115730 clearing reflink flag on inode 598115733 clearing reflink flag on inode 598115734 clearing reflink flag on inode 598115739 clearing reflink flag on inode 598115740 clearing reflink flag on inode 598115742 clearing reflink flag on inode 598115744 clearing reflink flag on inode 598115745 clearing reflink flag on inode 598115747 clearing reflink flag on inode 598115749 clearing reflink flag on inode 598115750 clearing reflink flag on inode 598115751 clearing reflink flag on inode 598115753 clearing reflink flag on inode 598115756 clearing reflink flag on inode 598115757 clearing reflink flag on inode 598115759 clearing reflink flag on inode 598115763 clearing reflink flag on inode 598115765 clearing reflink flag on inode 598115766 clearing reflink flag on inode 598115767 clearing reflink flag on inode 598115770 clearing reflink flag on inode 598115771 clearing reflink flag on inode 598115772 clearing reflink flag on inode 598115773 clearing reflink flag on inode 598115774 clearing reflink flag on inode 598115776 clearing reflink flag on inode 598115777 clearing reflink flag on inode 598115778 clearing reflink flag on inode 598115779 clearing reflink flag on inode 598115780 clearing reflink flag on inode 598115781 clearing reflink flag on inode 598115782 clearing reflink flag on inode 598115783 clearing reflink flag on inode 598115784 clearing reflink flag on inode 598228417 clearing reflink flag on inode 598228418 clearing reflink flag on inode 598228420 clearing reflink flag on inode 598343178 clearing reflink flag on inode 598576950 clearing reflink flag on inode 598576955 clearing reflink flag on inode 598580393 clearing reflink flag on inode 598591356 Phase 5 - rebuild AG headers and trees... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - reset superblock... Phase 6 - check inode connectivity... - resetting contents of realtime bitmap and summary inodes - traversing filesystem ... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - traversal finished ... - moving disconnected inodes to lost+found ... Phase 7 - verify and correct link counts... 
XFS_REPAIR Summary    Thu Feb 3 10:02:15 2022
Phase      Start            End              Duration
Phase 1:   02/03 10:02:12   02/03 10:02:12
Phase 2:   02/03 10:02:12   02/03 10:02:12
Phase 3:   02/03 10:02:12   02/03 10:02:13   1 second
Phase 4:   02/03 10:02:13   02/03 10:02:14   1 second
Phase 5:   02/03 10:02:14   02/03 10:02:14
Phase 6:   02/03 10:02:14   02/03 10:02:15   1 second
Phase 7:   02/03 10:02:15   02/03 10:02:15
Total run time: 3 seconds
done
Looks like that was the issue. Thanks @JorgeB and @Squid; I am amazed how you guys find the time to develop, have a family, and still be so responsive and helpful to the community.
  9. Wow, that is super unintuitive. How about dm-14? I am getting errors on dm-14; my guess is it's the other SSD drive. I'm guessing that for dm-n, n is the drive number as the system iterates through the devices, and that SSD drives are counted after the array drives. When I removed one drive the cache went from dm-14 to dm-13. For future reference, where in the diagnostics file is that information located? (See the sketch below.)
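     A hedged sketch of how to answer the dm-n question directly on the server, using standard Linux tools rather than anything Unraid-specific:

        lsblk -o NAME,KNAME,TYPE,SIZE,MOUNTPOINT   # KNAME shows the dm-N kernel name next to each mapper device and its parent disk
        ls -l /dev/mapper/                         # the mapper entries typically link to their ../dm-N nodes
        dmsetup ls                                 # lists device-mapper devices with their major:minor numbers

     Checking live with lsblk is usually quicker than trying to reconstruct the mapping from the diagnostics zip.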
  10. Here is the diagnostic file. Thanks titanium-diagnostics-20220203-0905.zip
  11. OK, I'm at a complete loss on this XFS repair. I moved all data off of drive 14 and took drive 14 offline, and then the XFS corruption error moved from dm-14 to dm-13. I performed a web GUI repair on drive 13. That did not stop the error:
Feb 3 06:42:17 Titanium kernel: XFS (dm-13): Unmount and run xfs_repair
Feb 3 06:42:17 Titanium kernel: XFS (dm-13): First 128 bytes of corrupted metadata buffer:
Feb 3 06:42:17 Titanium kernel: 00000000: 49 4e 41 ed 03 01 00 00 00 00 00 63 00 00 00 64 INA........c...d
Feb 3 06:42:17 Titanium kernel: 00000010: 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 00 ................
Feb 3 06:42:17 Titanium kernel: 00000020: 98 8f 34 fb 84 88 ff ff 61 96 0a ae 34 dd db 6c ..4.....a...4..l
Feb 3 06:42:17 Titanium kernel: 00000030: 61 96 0a ae 34 dd db 6c 00 00 00 00 00 00 00 1a a...4..l........
Feb 3 06:42:17 Titanium kernel: 00000040: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Feb 3 06:42:17 Titanium kernel: 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 7b 94 ec 72 ............{..r
Feb 3 06:42:17 Titanium kernel: 00000060: ff ff ff ff 9c 27 7a e1 00 00 00 00 00 00 00 06 .....'z.........
Feb 3 06:42:17 Titanium kernel: 00000070: 00 00 00 b1 00 09 b6 1c 00 00 00 00 00 00 00 00 ................
Feb 3 06:42:17 Titanium kernel: XFS (dm-13): Metadata corruption detected at xfs_dinode_verify+0xa7/0x56c [xfs], inode 0x20314439
I formatted drive 14 by converting from xfs-encrypted to xfs and back to xfs-encrypted (xfs encrypted -> format -> xfs -> format -> xfs encrypted). Then the corruption error came back on dm-14:
Feb 3 07:22:35 Titanium kernel: XFS (dm-14): Unmount and run xfs_repair
Feb 3 07:22:35 Titanium kernel: XFS (dm-14): First 128 bytes of corrupted metadata buffer:
Feb 3 07:22:35 Titanium kernel: 00000000: 49 4e 41 ed 03 01 00 00 00 00 00 63 00 00 00 64 INA........c...d
Feb 3 07:22:35 Titanium kernel: 00000010: 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 00 ................
Feb 3 07:22:35 Titanium kernel: 00000020: 98 8f 34 fb 84 88 ff ff 61 96 0a ae 34 dd db 6c ..4.....a...4..l
Feb 3 07:22:35 Titanium kernel: 00000030: 61 96 0a ae 34 dd db 6c 00 00 00 00 00 00 00 1a a...4..l........
Feb 3 07:22:35 Titanium kernel: 00000040: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Feb 3 07:22:35 Titanium kernel: 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 7b 94 ec 72 ............{..r
Feb 3 07:22:35 Titanium kernel: 00000060: ff ff ff ff 9c 27 7a e1 00 00 00 00 00 00 00 06 .....'z.........
Feb 3 07:22:35 Titanium kernel: 00000070: 00 00 00 b1 00 09 b6 1c 00 00 00 00 00 00 00 00 ................
That stopped the errors on dm-13 and moved them back to dm-14. I then performed a GUI xfs_repair using -L to zero the log and then ran the GUI xfs_repair with -v, but I am still getting the error. I am beginning to think that dm-14 does not correlate to drive 14 and that the corruption is somewhere else (see the sketch below). If I run a parity check, will it fix the corruption or will it just write the corruption to the parity drive?
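     For what it's worth, when the array disks are encrypted the filesystem lives on the device-mapper node, so pointing xfs_repair at the mapper device directly removes the guesswork about which dm-N is which drive. A hedged sketch, with the array started in maintenance mode and /dev/mapper/md13 used purely as a placeholder name (check the actual mapper name with ls -l /dev/mapper/ first):

        xfs_repair -nv /dev/mapper/md13    # read-only check, makes no changes
        xfs_repair -v /dev/mapper/md13     # actual repair, only if the check reports problems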
  12. Tried running the xfs repair from the GUI several times using the -v flag; no luck. I'm considering the Aliens approach and nuking it from orbit. Currently moving data off the drive and will reformat. Hopefully that will work.
  13. I am still getting this error in my logs every 15 seconds or so:
Feb 2 18:20:09 Titanium kernel: XFS (dm-14): Metadata corruption detected at xfs_dinode_verify+0xa7/0x56c [xfs], inode 0x20314439 dinode
Feb 2 18:20:09 Titanium kernel: XFS (dm-14): Unmount and run xfs_repair
Feb 2 18:20:09 Titanium kernel: XFS (dm-14): First 128 bytes of corrupted metadata buffer:
Feb 2 18:20:09 Titanium kernel: 00000000: 49 4e 41 ed 03 01 00 00 00 00 00 63 00 00 00 64 INA........c...d
Feb 2 18:20:09 Titanium kernel: 00000010: 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 00 ................
Feb 2 18:20:09 Titanium kernel: 00000020: 98 8f 34 fb 84 88 ff ff 61 96 0a ae 34 dd db 6c ..4.....a...4..l
Feb 2 18:20:09 Titanium kernel: 00000030: 61 96 0a ae 34 dd db 6c 00 00 00 00 00 00 00 1a a...4..l........
Feb 2 18:20:09 Titanium kernel: 00000040: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Feb 2 18:20:09 Titanium kernel: 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 7b 94 ec 72 ............{..r
Feb 2 18:20:09 Titanium kernel: 00000060: ff ff ff ff 9c 27 7a e1 00 00 00 00 00 00 00 06 .....'z.........
Feb 2 18:20:09 Titanium kernel: 00000070: 00 00 00 b1 00 09 b6 1c 00 00 00 00 00 00 00 00 ................
I ran this twice. Not sure if this was answered, but is (dm-14) drive 14? I was expecting an sd-something, like sda, sdb, etc. (A sketch for tracking down the repeated inode follows below.)
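     Since the log keeps naming the same inode (0x20314439), one hedged way to see which file it belongs to, once the filesystem is mounted again, is to convert the hex inode number to decimal and search for it; /mnt/disk14 is only a guess at the mount point:

        printf '%d\n' 0x20314439                 # -> 540099641
        find /mnt/disk14 -xdev -inum 540099641   # locate the file that owns the suspect inode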
  14. I ran with the -lv flag. This was the output:
Phase 1 - find and verify superblock...
        - block cache size set to 6161160 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 1945344 tail block 1945344
        - scan filesystem freespace and inode maps...
clearing needsrepair flag and regenerating metadata
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 1
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (1:1945340) is ahead of log (1:2).
Format log to cycle 4.
XFS_REPAIR Summary    Tue Feb 1 19:45:43 2022
Phase      Start            End              Duration
Phase 1:   02/01 19:42:47   02/01 19:42:47
Phase 2:   02/01 19:42:47   02/01 19:43:16   29 seconds
Phase 3:   02/01 19:43:16   02/01 19:43:27   11 seconds
Phase 4:   02/01 19:43:27   02/01 19:43:28   1 second
Phase 5:   02/01 19:43:28   02/01 19:43:29   1 second
Phase 6:   02/01 19:43:29   02/01 19:43:39   10 seconds
Phase 7:   02/01 19:43:39   02/01 19:43:39
Total run time: 52 seconds
done
Yep, and you can sympathize with my poor wife who has to live with me...
  15. OK, I have to give credit to @Squid for this same solution; I ignored him because I didn't think I had any custom IP addresses. When you posted the log with the name of my server all up in my face, I checked my dockers and sure enough my speed test_tracker docker was using a custom IP. So my sincere apologies to Mr. Squid, who nailed this early on, and thanks @JorgeB for pointing it out again. I made the changes to the docker settings; hopefully this will work. Not sure if I should start a new thread, but since this is likely due to all of the kernel crashes I thought I'd post it here. I have some corrupted XFS files. I saw SpaceInvader's excellent video on how to fix XFS corruption, and this thread, but I have no idea what drive dm-14 refers to; I was expecting something like sda or sdb. Using an educated guess, I ran the xfs_repair from the Unraid GUI on drive 14, but that didn't seem to fix the issue. Below is the error message in the logs and a list of my Unraid drives and their Linux names.
Feb 1 17:58:02 Titanium kernel: XFS (dm-14): Metadata corruption detected at xfs_dinode_verify+0xa7/0x56c [xfs], inode 0x20314439 dinode
Feb 1 17:58:02 Titanium kernel: XFS (dm-14): Unmount and run xfs_repair
Feb 1 17:58:02 Titanium kernel: XFS (dm-14): First 128 bytes of corrupted metadata buffer: