Everything posted by skidz7

  1. This plugin has been serving me well but recently started having issues. I had previously locked my CrashPlan docker container to version 23.02.2 (as I had issues updating further than that). A few days ago it stopped working, so I figured an upgrade to the latest version might help, but I'm continuing to have issues. Any suggestions on what I should be looking for or checking? Log below:
  2. Was getting similar errors with mine following a recent update; doing this brought it back.
  3. Did the same thing here after the latest update killed mine.
  4. Update: I uninstalled all plugins, downgraded Unraid to the previous stable release, re-created my system and appdata shares, and re-set up my docker containers (they needed a serious cleanup anyway), and so far I'm making progress.
  5. Background: Last night Unraid quit responding. I tried rebooting and it wouldn't come back up; it was giving errors related to the USB disk, and I realized the disk was dying (a SanDisk Cruzer Micro I'd had for less than 5 months), so I ended up having to replace it. I had a ZIP backup that was a month or so old and restored that to a new flash drive, updated the key, etc. I got things back online but quickly realized the backup was from a moment in time prior to a cache drive swap I did, so the system and appdata shares that were set to Cache:prefer were suddenly pointed at the old cache drive. I immediately backed up both the old and new cache drives to an unassigned disk, but now I'm having some issues getting the new cache drive to work properly. I've been able to format it, but the moment I set any share to Cache:prefer and then initiate the mover for a share, I get a mover error that says "Transport endpoint is not connected" and all of the shares disappear.

     I work in IT so I'm pretty familiar with technology, but I'm a little in the dark about some of the magic that Unraid does under the hood. I can't seem to get any caching to work properly following this USB replacement. I've done a lot of searching in the forum and on Reddit, but my situation appears to be a unique combination of issues that I haven't seen someone else report/solve. Any advice or suggestions would be much, much appreciated.

     I did try rebooting in safe mode, and that seemed to prevent the Transport endpoint error with one of the test shares I created. I'm currently working on disabling different software/plugins to see if I can figure out what's affecting that (a mount-check sketch follows this post list). In the meantime I've attached diagnostics in case that helps. I do use Docker but have it disabled for the moment until I can iron all of this out.

     Here's an example of the error when trying to run the mover:

     Jan 15 19:50:13 X1 move: create_parent: /mnt/disk8/system error: Transport endpoint is not connected
     Jan 15 19:50:13 X1 move: move_object: /mnt/disk8/system: Transport endpoint is not connected
     Jan 15 19:50:13 X1 kernel: shfs[10088]: segfault at 0 ip 00000000004043cc sp 0000152893a6d780 error 4 in shfs[402000+c000]
     Jan 15 19:50:13 X1 kernel: Code: 48 8b 45 f0 c9 c3 55 48 89 e5 48 83 ec 20 48 89 7d e8 48 89 75 e0 c7 45 fc 00 00 00 00 8b 45 fc 48 63 d0 48 8b 45 e0 48 01 d0 <0f> b6 00 3c 2f 74 43 8b 05 67 df 00 00 85 c0 78 2f e8 fe df ff ff

     x1-diagnostics-20220115-1941.zip
  6. Update from another user who was having the same issue on 6.9.3. After upgrading to 6.10-rc2: same result. After updating the Docker settings to use ipvlan instead of macvlan: that seems to have resolved the issue (a sketch of the driver difference follows this post list). I have not noticed any issues with any of my docker containers as a result of this change. Uptime is 3 days, 43 minutes (prior to this change, I was seeing hard crashes before the 3-day mark). Will report back if there's any change.
  7. Just found this thread. I'm on 6.9.2 and have been having the same issue for some time now. What's been the result for those of you who have upgraded to 6.10-rc? My Unraid box:
     - Host access to custom networks: off
     - Using a combination of Bridge and br0 with fixed IP addresses
     - Multiple NICs in the server, also using bonding
     - Motherboard model: Supermicro X9DRi-LN4+ (I noticed someone else having the same problem was using the same model)
     - Was running a UniFi controller container

     I just today removed the UniFi container to see if there's any change. I'm considering the 6.10 update as another option, depending on how that's worked out for others with this problem.

     Excerpt of syslog:

     Oct 26 10:19:46 X1 kernel: ------------[ cut here ]------------
     Oct 26 10:19:46 X1 kernel: WARNING: CPU: 4 PID: 3970 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x9b>
     Oct 26 10:19:46 X1 kernel: Modules linked in: xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_m>
     Oct 26 10:19:46 X1 kernel: CPU: 4 PID: 3970 Comm: kworker/4:0 Not tainted 5.10.28-Unraid #1
     Oct 26 10:19:46 X1 kernel: Hardware name: Supermicro X9DRi-LN4+/X9DR3-LN4+/X9DRi-LN4+/X9DR3-LN4+, BIOS 3.4 11/20/2019
     Oct 26 10:19:46 X1 kernel: Workqueue: events macvlan_process_broadcast [macvlan]
     Oct 26 10:19:46 X1 kernel: RIP: 0010:__nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
     Oct 26 10:19:46 X1 kernel: Code: e8 dc f8 ff ff 44 89 fa 89 c6 41 89 c4 48 c1 eb 20 89 df 41 89 de e8 36 f6 ff ff 84 c0 75>
     Oct 26 10:19:46 X1 kernel: RSP: 0018:ffffc9000c764dd8 EFLAGS: 00010202
     Oct 26 10:19:46 X1 kernel: RAX: 0000000000000188 RBX: 000000000000144d RCX: 00000000ba3cf88b
     Oct 26 10:19:46 X1 kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffffa03085fc
     Oct 26 10:19:46 X1 kernel: RBP: ffff8898dabb1b80 R08: 000000006d5ec060 R09: 0000000000000000
     Oct 26 10:19:46 X1 kernel: R10: 0000000000000098 R11: ffff888107e9cd00 R12: 00000000000010ff
     Oct 26 10:19:46 X1 kernel: R13: ffffffff8210b440 R14: 000000000000144d R15: 0000000000000000
     Oct 26 10:19:46 X1 kernel: FS: 0000000000000000(0000) GS:ffff88981fb00000(0000) knlGS:0000000000000000
     Oct 26 10:19:46 X1 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Oct 26 10:19:46 X1 kernel: CR2: 00001528988c3698 CR3: 00000018e47b0006 CR4: 00000000001706e0
  8. I upgraded from a different author's CrashPlan container. I started with an empty appdata and have been re-linking my files per the instructions. I've noticed, though, that it seems to be re-uploading everything, and I'm not sure why or what I can check. In a couple of instances I went to the restore section on the website and saw that the files were there (under the old path) and available for restore, but the files are still uploading again. In the new GUI it shows the backups at 0%, with the entirety of the backup set still "to back up". Is there something else I should check or change to get it to recognize what's already there and skip it, so I don't have to re-upload TBs of data? Or is there any way to see whether the deduplication is actually functioning properly?

     EDIT: I may have figured it out. I re-created the path mapping I had in the old container, pointing to /mnt/user, and then removed the selection that was using the new path. It's currently "backing up" my largest backup set, but even though the number of GBs backed up is increasing, it looks like that data isn't actually transferring, so it must be deduplicating now. I'm not sure if the change of file paths kept it from doing that previously, but adding the old path seems to have done the trick. (A hedged example of the volume mapping follows this post list.)
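A note on the "Transport endpoint is not connected" error in post 5: that message is the usual symptom of a FUSE mount whose backing process has died, which matches the shfs segfault in the log (shfs is the FUSE process that serves /mnt/user on Unraid). A minimal shell sketch for checking whether the user-share mount has gone stale; /mnt/user is the standard mount point, but treat these commands as a diagnostic assumption for this setup, and prefer stopping/starting the array or rebooting if unsure:

    # A live check: on a dead FUSE mount even 'ls' fails with
    # "Transport endpoint is not connected".
    ls /mnt/user

    # See whether the shfs process and its mount entry still exist.
    ps aux | grep '[s]hfs'
    mount | grep /mnt/user

    # If the process is gone but the mount entry lingers, a lazy unmount
    # clears the stale endpoint; restarting the array then recreates
    # /mnt/user cleanly.
    umount -l /mnt/user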
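On the macvlan traces in posts 6 and 7: Unraid exposes the macvlan/ipvlan choice as a Docker setting in the GUI, and the switch amounts to which Docker network driver backs the custom br0 network. A hedged sketch of the equivalent docker CLI calls — the subnet, gateway, and network names here are placeholders, not values taken from the posts:

    # macvlan: each container gets its own MAC address on the parent
    # interface; this is the mode implicated in the
    # macvlan_process_broadcast / nf_conntrack call traces above.
    docker network create -d macvlan \
      --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
      -o parent=br0 macvlan_net

    # ipvlan (L2 mode by default): containers share the parent
    # interface's MAC address but still get their own IPs, avoiding the
    # macvlan broadcast-processing path.
    docker network create -d ipvlan \
      --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
      -o parent=br0 ipvlan_net

    # A container with a fixed IP on the new network, as with the br0
    # fixed-IP setups described above.
    docker run -d --name nettest --network ipvlan_net \
      --ip 192.168.1.50 alpine sleep infinity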
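On the CrashPlan re-upload question in post 8: CrashPlan identifies files by the path it sees inside the container, so if a new container mounts the same data at a different internal path, every file looks new to the backup selection even when block-level deduplication later avoids re-transferring the data. A hedged docker-run illustration of that path change — the image name and internal paths are hypothetical, not the actual template values:

    # Old container (hypothetical): the data was visible inside the
    # container at /mnt/user/..., and the archive recorded those paths.
    docker run -d --name crashplan-old \
      -v /mnt/user:/mnt/user \
      some/crashplan-image

    # New container mounting the same data at /storage instead: every
    # file now has a different in-container path, so the backup set
    # shows 0% and appears to re-upload.
    docker run -d --name crashplan-new \
      -v /mnt/user:/storage \
      some/crashplan-image

    # The fix from post 8, expressed as a mapping: add the old-style
    # mount back and point the backup selection at the old path.
    docker run -d --name crashplan-fixed \
      -v /mnt/user:/mnt/user \
      -v /mnt/user:/storage \
      some/crashplan-image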