seneo

Members
  • Posts: 13

  1. I have a similar issue: the config file doesn't seem to have changed, but the save path of each torrent has been changed to /config/qBittorrent/downloads, and qBittorrent is now moving all the files inside the container (onto my cache drive). Reverting to the previous image (4.3.9-2-01) seems to fix the issue (pinning the image tag is sketched after the post list).
  2. I'm trying to find a way to use hardlinks so that I can keep seeding content through my torrent client after it has been imported by Radarr/Sonarr. Right now the `use hardlinks` option does not work because I map one volume for my seedbox share and another one for the TV show/movie share, so Sonarr/Radarr see them as different filesystems and therefore can't create hardlinks between them. I've seen it recommended many times to use a single share, e.g. 'Media', containing 'Seedbox', 'TVshow' and 'Movie' folders, to work around this, but I'm not very fond of that solution as I'd rather keep separate shares. So I was wondering about using `/mnt/user/` directly as a volume for the different containers (torrent client, Sonarr, Radarr...). Apart from the fact that it would give these containers access to the whole array and not only to specific shares/data, what could be the drawbacks/issues with this solution? (A quick hardlink test is sketched after the post list.)
  3. One week since I put in the new RAM and no issues since (no warnings, no errors, no Docker crashes, etc.). You were definitely right.
  4. I've just ordered a graphics card and new memory sticks. Let's hope that's the source of all the issues. I was already suspecting the RAM, but with your input I'm more confident that it's the main issue. Thanks for your help.
  5. Here are the full logs. The hardware hasn't changed since I built it a year and a half ago, except for the SSD that died and was replaced with an NVMe drive in June. The issues only seem to have appeared during this year, so I don't think it's related to overclocked RAM. I'm not able to run a memtest since I don't have a graphics card. intersect-diagnostics-20201222-1344.zip
  6. I keep getting issues with Docker: containers crashing randomly until the whole Docker service is KO (won't start/stop, image corrupted, etc.). I had to re-create the docker image 3 weeks ago because of this kind of issue, and last week I couldn't even start Docker without the system getting stuck, so I restarted Docker on Saturday. Two days ago InfluxDB randomly stopped; I restarted it and it has been working since. Last night I had CA Backup/Restore scheduled and it seems to have caused an issue with Bazarr: Bazarr wasn't working (although it was started), with constant `OSError: [Errno 5] I/O error` in Bazarr's log, and in the server's syslog I see continuous warnings:

     BTRFS warning (device loop3): csum failed root 1069 ino 8067 off 24199168 csum 0x4bd3e39b expected csum 0xc589050e mirror 1

     The warnings stopped when I stopped Bazarr. My cache/SSD is XFS-formatted, so the warning must be related to the docker image. I ran a scrub operation that detected 7 errors but couldn't correct them:

     Error summary: csum=7
       Corrected: 0
       Uncorrectable: 7
       Unverified: 0

     And that's just the start; I'm pretty sure that by the end of the week the whole Docker setup won't be working anymore. I'm not sure what could be the source of all these issues. I plan to reformat the SSD/cache drive, but I'm not sure that will definitely fix the problem. Is anybody familiar with this kind of issue, or could someone give me a hint about the possible source? (A sketch of checking the docker image's filesystem follows after the post list.) syslog_2020-12-22.log
  7. Machine rebooted, array started and no more warnings for now. Thank you both for your help and your quick answers. I'll mark the topic as solved in a couple of hours just to be sure (the warning didn't always appear immediately after starting the array).
  8. Done.

     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
     ALERT: The filesystem has valuable metadata changes in a log which is being destroyed because the -L option was used.
             - scan filesystem freespace and inode maps...
     sb_fdblocks 75199576, counted 76180631
             - found root inode chunk
     Phase 3 - for each AG...
             - scan and clear agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 1
             - agno = 0
             - agno = 2
             - agno = 3
     Phase 5 - rebuild AG headers and trees...
             - reset superblock...
     Phase 6 - check inode connectivity...
             - resetting contents of realtime bitmap and summary inodes
             - traversing filesystem ...
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     Maximum metadata LSN (109:3726193) is ahead of log (1:2).
     Format log to cycle 112.
     done

     Does that mean it is back to normal now and I can restart the array? At least for this issue.
  9. The output without the -n is as follows:

     ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.

     I'm not sure what 'Mount the filesystem to replay the log, and unmount it before re-running xfs_repair.' means in this context. Does it mean I should start/stop the array before re-running xfs_repair in maintenance mode? (Mounting the disk by hand is sketched after the post list.)
  10. OK, so when I use the `check` button of the `Check Filesystem Status` section of disk1, I see the following log:

      Phase 1 - find and verify superblock...
              - block cache size set to 736264 entries
      Phase 2 - using internal log
              - zero log...
      zero_log: head block 3726195 tail block 3726191
      ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
              - scan filesystem freespace and inode maps...
      sb_fdblocks 75199576, counted 76180631
              - found root inode chunk
      Phase 3 - for each AG...
              - scan (but don't clear) agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
      No modify flag set, skipping phase 5
      Phase 6 - check inode connectivity...
              - traversing filesystem ...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      Phase 7 - verify link counts...
      No modify flag set, skipping filesystem flush and exiting.

      XFS_REPAIR Summary    Fri Dec 18 20:18:39 2020

      Phase      Start           End             Duration
      Phase 1:   12/18 20:15:59  12/18 20:15:59
      Phase 2:   12/18 20:15:59  12/18 20:16:05  6 seconds
      Phase 3:   12/18 20:16:05  12/18 20:17:42  1 minute, 37 seconds
      Phase 4:   12/18 20:17:42  12/18 20:17:42
      Phase 5:   Skipped
      Phase 6:   12/18 20:17:42  12/18 20:18:39  57 seconds
      Phase 7:   12/18 20:18:39  12/18 20:18:39

      Total run time: 2 minutes, 40 seconds

      Should I retry without the -n option, as mentioned in Phase 2? I'm scared of doing more harm than good without the 'no modify' option.
  11. I'm trying to troubleshoot issues I've been having recently with my Unraid server (Unraid version 6.8.3, 2020-03-05). I have many issues regarding Docker (containers crashing/stopping, the whole Docker system crashing, etc.), but right now I'm focusing on some warnings that pop up when I start the array:

      kernel: XFS (dm-0): Metadata corruption detected at xfs_dinode_verify+0xa5/0x52e [xfs], inode 0x18c72912a dinode

      I'm not able to identify the disk in question. I assume that dm-0 corresponds to md0, but looking at the logs I don't see this mount point. The only disks I don't see mounted in the logs are the parity drive and the flash drive, but they don't use XFS. The warnings seem to appear only when starting the array after a boot, not after stopping/starting the array. Does anybody have any idea? (Commands for mapping dm-0 back to a disk are sketched after the post list.) intersect-syslog-20201218-1836.zip
  12. That seems to fix my issue. I still get the same speed copying from my current data drive (over the network) to the SSD and the HDD (around 100 MB/s), but copying from /mnt/disk1 to /mnt/cache is now faster than from /mnt/disk1 to /mnt/disk1. That seems more normal to me: slower when going through the network, and copying to the HDD slower than to the SSD. Thanks a lot.
  13. Hi everyone. I just built my first PC last week to be used as a home server with Unraid (6.5.3 currently installed with a trial key). Everything seems to work fine and Unraid exceeds my expectations, except that my write speeds seem odd. I tried to rsync (with the -avzh options) some data to my array and I always get pretty much a transfer speed of 30 MB/s. I tried to transfer:

      • from my current data drives (connected to my iMac via USB 3 and shared by OS X over SMB) to one of the two hard drives currently in the array (via /mnt/disk1/ instead of using the user shares, into a folder belonging to a share set up not to use the cache),
      • from the same source to my cache drive (an SSD, via /mnt/cache),
      • from /mnt/disk1 to /mnt/disk1 (into a folder belonging to a share set up not to use the cache),
      • from /mnt/disk1 to /mnt/cache.

      I do not have a parity drive. The hardware is the following: https://fr.pcpartpicker.com/list/qdvXfH

      It seems strange to me to get the same speed transferring from external drives and from internal drives, and even stranger to get the same speed transferring to an HDD and to an SSD. Is that a normal Unraid thing that I am not aware of, or is there something fishy in my setup? As stated, it's my first Unraid setup and even my first PC build, so it could be totally normal. Thanks for your help (and sorry for my broken English). (A note on the rsync options follows after the post list.)
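
A note on post 1: a minimal sketch of pinning the qBittorrent container to the last known-good tag instead of :latest, so an update can't silently change behaviour again. The repository name below is a placeholder; the real one comes from the container's Docker template.

      # Placeholder repository; substitute the repository actually used by the template.
      docker pull REPOSITORY/qbittorrent:4.3.9-2-01
      # In the Unraid template, set the Repository field to the same repository:tag
      # string so the container stops following :latest.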
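
A quick way to test the idea in post 2 from the Unraid console, with hypothetical example paths; it simply checks whether two shares reached through the same /mnt/user mount can hardlink to each other.

      # Try a hardlink between two shares and compare device + inode numbers.
      ln /mnt/user/Seedbox/example.mkv /mnt/user/Movies/example.mkv
      stat -c '%d %i %n' /mnt/user/Seedbox/example.mkv /mnt/user/Movies/example.mkv
      # Identical device and inode numbers on both lines mean it is a real hardlink;
      # an "Invalid cross-device link" error means the paths are on different filesystems.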
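
For the csum warnings in post 6, a sketch of inspecting the BTRFS filesystem inside the docker image, assuming it is loop-mounted at /var/lib/docker as on a default Unraid setup.

      # Per-device error counters for the filesystem backing docker.img
      btrfs dev stats /var/lib/docker
      # Re-run a scrub in the foreground (-B) and print statistics when it finishes
      btrfs scrub start -B /var/lib/docker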
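
For the xfs_repair messages in posts 8-10, a sketch of doing the 'mount the filesystem to replay the log' step by hand while the array is in maintenance mode. Device and mount-point names are examples only; on an encrypted array the disk appears under /dev/mapper rather than as /dev/mdX.

      mkdir -p /x              # temporary mount point (example)
      mount /dev/md1 /x        # mounting replays the XFS journal
      umount /x
      xfs_repair /dev/md1      # then re-run the repair without -n
      # After a repair, any orphaned files end up in lost+found at the top of that disk.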
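
For post 11, a few console commands (a sketch, nothing Unraid-specific assumed) to map the dm-0 name from the kernel message back to a physical disk.

      ls -l /dev/mapper/                       # friendly names (md1, ...) linked to dm-N nodes
      dmsetup ls                               # device-mapper names with their major:minor numbers
      lsblk -o NAME,MAJ:MIN,TYPE,MOUNTPOINT    # full block-device tree, dm-N devices included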
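
On the transfer speeds in post 13: the -z flag compresses the rsync stream, which mostly costs CPU on local or LAN copies and can cap throughput well below what the disks can do. A sketch of the same copy without compression, with example paths:

      rsync -avh --progress /mnt/disk1/Share/ /mnt/cache/Share/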