
Community Reputation

6 Neutral

About HNGamingUK

  • Rank
    Advanced Member



  1. I could be thinking about this wrongly, but my understanding is that @CHBMB is stopping Unraid development, so no more plugins for Unraid from them... I am unsure how this affects dockers, since they can (and always have been able to) be installed manually by pulling directly from Docker Hub instead of using Community Apps. It would be nice (if not already provided somewhere) to have official word from the whole @linuxserver.io team. But I agree with multiple replies that if there had been better communication from @limetech, then @CHBMB quitting development likely would not have happened.....
  2. For reference to anyone in the future, I have fixed this by doing the following: removed br0.20 from the Unraid network settings, set the Windows VM network to br0, and set VLAN tagging inside the Windows VM for VLAN 20. This has allowed my Windows VM to connect to the correct subnet, and also lets me manage my Unraid web UI and docker UIs.
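For comparison, on a Linux guest the equivalent in-guest tagging step can be sketched with iproute2 (the interface name and address below are assumptions, not taken from the post; the actual fix above was done through the Windows network adapter settings):

```shell
# Create a VLAN 20 sub-interface on the guest's NIC (here assumed to be eth0)
ip link add link eth0 name eth0.20 type vlan id 20
ip link set eth0.20 up

# Give it an address on the VLAN 20 subnet (example address only)
ip addr add 192.168.20.50/24 dev eth0.20
```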
  3. I would check the "Cache" setting of the share in question and confirm that it hasn't been set to "No".
  4. Hello, So as the title suggests, I am unable to access my dockers that are in default bridge mode from a Windows VM (on Unraid) on a VLAN (br0.20). Here is what my Unraid network config looks like: Here is a screenshot of my dockers: While on my Windows VM (which has its network as br0.20) I am unable to access the docker containers in standard bridge mode (e.g. Grafana); however, I am able to access the dockers using custom networks (e.g. UNMS). My network setup is as follows: br0 (standard bridge) is on one subnet, br0.20 (VM assigned) is on a second subnet, and br0.5 (2 of the dockers) is on a third subnet. I am able to access any of the dockers' web UIs from a mobile device on the same subnet as my Windows VM. I believe this has something to do with the network isolation between dockers and VMs, am I right? How would I go about allowing access from the Windows VM to the dockers on the default bridge?
  5. Okay, brilliant. So setting the following in Auto Update: and then the following in Backup: will back up every Sunday at 3:05 and, once complete, trigger an update.
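For reference, the schedule the plugin builds here corresponds to a standard cron expression; a sketch (the script path is purely illustrative, the plugin manages this internally):

```shell
# Cron fields: minute hour day-of-month month day-of-week
# 03:05 every Sunday (day-of-week 0 = Sunday):
5 3 * * 0  /boot/config/plugins/ca.backup2/backup.sh  # hypothetical path
```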
  6. Could someone explain how the "Update Applications On Restart?" setting works on this plugin? If I have this set to yes what should I set in the "CA Auto Update Applications" settings?
  7. Yeh I didn't want my VM to die randomly as I use it for gaming. I will have a look at working out some better docker memory limits. Thanks for helping me understand how it works.
  8. Yes, I set the MySQL and other docker limits before MySQL was getting killed. I assumed that if the container got close to the limit it would just not use more, but reading your messages suggests that if it goes over the limit I set, it will just kill the container?
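That is indeed how Docker's hard memory limit behaves: it is enforced by the kernel's cgroup OOM killer, so a container that exceeds it gets its main process killed rather than having allocations quietly refused. A minimal sketch with the standard Docker CLI (the container name and image tag are just examples):

```shell
# Hard-limit a container to 1 GiB of RAM. If the processes inside
# exceed this, the kernel OOM killer terminates them and the
# container stops -- the allocation is not simply capped.
docker run -d --name mysql-limited --memory=1g mysql:8

# Inspect the configured limit (reported in bytes):
docker inspect -f '{{.HostConfig.Memory}}' mysql-limited
```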
  9. Hello, For reference, my server has 32GB of RAM. I wonder if anyone is able to help me; I am unsure what is causing this and what I can do to stop it. For the past couple of nights, Fix Common Problems has sent an alert about out-of-memory errors, and when I log in I see that the MySQL docker has stopped. (Only this one stops, so I am guessing the OOM killer decides to kill that one for whatever reason.) However, my memory on the dashboard is hovering around 70% max, so I am unsure what is causing it. I have set each docker to have a max memory limit so that they can't randomly use more memory, however it still keeps happening. The only other item that I run constantly is a Windows VM (16GB assigned to it), which should leave 16GB for the dockers and the system. I have also attached my diagnostics; if anyone can help me find out what is causing this, it would be wonderful. I don't have the capacity to just upgrade my RAM currently, so I need to fix this issue. diagnostics.zip
  10. Okay, great, that information helped. As I had the secondary cache working and all the data was there, I just copied it from the single-drive cache pool to the array. I then stopped the array, assigned the primary cache back, and started the array. I then ticked the format box and started the format of the primary cache. As expected, this formatted the cache pool, so all I had to do was restore the files to the cache and everything was back to normal. Just so I know for the future: reading the errors shown, can someone explain what happened?
  11. Yeh, it seems that way; I'm unsure if it is possible with the current setup Unraid/Limetech use. But it would be a nice feature to allow live disk expansion, as most newer operating systems support it.
  12. Hello, Wondering if anyone can help me with an issue I am having with my cache pool. Basically, after a reboot today, when I started up the array I saw the following: Alongside this I see the below, asking me if I want to format the primary cache drive. Now obviously I do not want to, as I don't want to lose data. I then ran a btrfs check --readonly /dev/sdb1, which came back with the following output:

Opening filesystem to check...
Checking filesystem on /dev/sdb1
UUID: ab81d341-8531-4c08-8fa1-645911b301fd
cache and super generation don't match, space cache will be invalidated
found 310226411520 bytes used, error(s) found
total csum bytes: 0
total tree bytes: 90783744
total fs tree bytes: 29900800
total extent tree bytes: 60588032
btree space waste bytes: 16964433
file data blocks allocated: 156191051776
 referenced 154714845184

I then did a btrfs check --repair /dev/sdb1, which showed the following:

Starting repair.
Opening filesystem to check...
Checking filesystem on /dev/sdb1
UUID: ab81d341-8531-4c08-8fa1-645911b301fd
[1/7] checking root items
Fixed 0 roots.
[2/7] checking extents
incorrect offsets 12845 12358
incorrect offsets 12845 12358
incorrect offsets 12845 12358
incorrect offsets 12845 12358
Shifting item nr 94 by 487 bytes in block 1254219743232
Shifting item nr 95 by 487 bytes in block 1254219743232
Shifting item nr 96 by 487 bytes in block 1254219743232
Shifting item nr 97 by 487 bytes in block 1254219743232
Shifting item nr 98 by 487 bytes in block 1254219743232
Shifting item nr 99 by 487 bytes in block 1254219743232
items overlap, can't fix
check/main.c:4336: fix_item_offset: BUG_ON `ret` triggered, value -5
btrfs[0x42f27d]
btrfs[0x43842d]
btrfs[0x438960]
btrfs[0x43950c]
btrfs[0x43d495]
btrfs(main+0x90)[0x40ecc0]
/lib64/libc.so.6(__libc_start_main+0xeb)[0x1473a7ed6e5b]
btrfs(_start+0x2a)[0x40ef4a]
Aborted

So at this point I am stumped on what to do to resolve the issue. I have been able to set the primary cache to no device and then start the array, at which point it seems the secondary cache drive is working fine and has all the data I need. However, preferably I do not want to run at reduced redundancy for an extended period of time. If I am in fact being stupid and do need to format the primary cache drive to then make it re-sync the RAID, I can do that. Any help at all would be very much appreciated. Apologies this post is VERY long; I just wanted to make sure that I covered all the options I could find before asking for help.
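Once the data is safe elsewhere, the usual btrfs route to restore redundancy is to wipe the damaged device, re-add it to the pool, and rebalance; a sketch, assuming /dev/sdb1 is the wiped primary device and the surviving pool is mounted at /mnt/cache (device paths and mount point are assumptions; on Unraid the GUI format-and-rebuild achieves the same result, as the later reply describes):

```shell
# Clear the damaged filesystem signature on the old primary device
wipefs -a /dev/sdb1

# Add it back to the running pool
btrfs device add /dev/sdb1 /mnt/cache

# Rebalance with data and metadata converted to raid1 across both devices
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

# Verify both devices are now members of the pool
btrfs filesystem show /mnt/cache
```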
  13. For Proxmox, here is the documentation: https://pve.proxmox.com/wiki/Resize_disks Although, looking further, it seems Proxmox may use a different vdisk handling method; I'm not sure myself. I just know it is possible with Proxmox, both in the GUI and via the CLI.
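On the Proxmox CLI side, the linked page describes the `qm resize` command; a sketch (the VM ID and disk name are examples):

```shell
# Grow the scsi0 disk of VM 100 by 10 GiB while the VM is running;
# the guest OS still has to extend its own partition afterwards.
qm resize 100 scsi0 +10G
```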
  14. Proxmox would be my example of live disk increase; it uses QEMU/KVM.
  15. Hello All, I was wondering if someone would be able to help me. I am running a Windows VM and would prefer not to have to turn it off to increase the disk. From what I can see, it can't be done via the GUI? Is this a new feature that needs to be requested? In either case, what would be the best command to run to increase the disk? Many Thanks
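Since Unraid VMs are libvirt/QEMU domains, a live grow can be sketched with `virsh blockresize` (the domain name and vdisk path below are assumptions for illustration; check yours with `virsh list` and the VM's XML):

```shell
# List running domains to find the exact domain name
virsh list

# Grow the vdisk of the running domain to 100 GiB without shutting it down
virsh blockresize "Windows 10" /mnt/user/domains/Windows10/vdisk1.img 100G
```

Afterwards the new space still appears as unallocated inside Windows, so the partition has to be extended there (Disk Management or diskpart).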