nka

Members
  • Posts: 20
  • Joined
  • Last visited

nka's Achievements

Noob (1/14)

Reputation: 0

  1. I just want to say that I've upgraded to 6.12.4 from 6.11.5 and it has been stable for me (6.12.2 was freezing every day; I'm at 1 day and 6 hours of uptime now). The process was (a command sketch for noting the br0 IPs follows after this list):
     - Take note of the IPs of all Dockers using br0 (I already had them, but just in case).
     - Disable Docker.
     - Disable Network Bridge in Network Settings.
     - Go back to the Docker settings to change from macvlan to ipvlan.
     - Enable Docker (and let the containers start).
     - Update all br0 Dockers to eth0 (or a VLAN) and set the old IPs again.
     - Disable Docker.
     - Update to 6.12.4.
     - Reboot.
     - Enable Docker.
     Thanks!
  2. Not sure mine were related to macvlan, but I'll try again.
  3. Is 6.12.4 still crashing? I reverted to 6.11.5 because 6.12.2 was freezing all the time.
  4. If I don't have any issue on 6.11.5 with macvlan, should/can I stay on macvlan? 6.12.2 was crashing on me (freezing) with ipvlan or macvlan... so I rolled back and have been waiting since.
  5. No trace for me (syslog not enabled), but I do have a Kernel Panic too on 6.12.2; I never had any crash before. Here's a picture of the IPMI. Hope you'll have a fix soon... can I roll back to 6.11?
  6. Just want to add that I had the same problem today... refreshing the page made the process run in the background. I don't know how many loops it did, but it was more than 4. What I did when this happened was update 5-6 Dockers by hand... and then run the "update all". I noticed it updated the Dockers that were already updated... then updated all the other Dockers... then started again in a loop.
  7. Thanks for the fix, it worked for me. Deleting and rebuilding the Docker didn't fix it. mv -v /etc/nginx/conf.d/stream.conf /etc/nginx/stream.d/
  8. Just want to add a little update to that... one of my parity drives (dual parity) died... since I replaced it, everything works fine. I even remade the cache btrfs RAID1 with the two SSDs and everything works fine... no more high CPU, even for a long 45GB file transfer.
  9. Oh, just in case... I tried removing the whole docker.img and re-creating the VM from the template. Edit: But it works fine if I go back to the original template (not re-using the one I have).
  10. I made a modification to a Docker to test another network configuration and now I can't go back to bridge as it was before. I made some tests with other Dockers and they all do the same thing. Instead of mapping port "80" to the port I specified, they are mapping the port I specified to the internal port... therefore making it non-working (see the port-mapping example after this list). How can I fix this? before : after :
  11. I know they are not the fastest, but they were cheap and were working great until I moved from XFS to btrfs for the RAID1 Cache Pool :(
  12. It's an old topic, but: unRAID will see the drives from a 3ware controller... will it also report the RAID health?
  13. Is there a list (wiki?) of SSDs that work in a btrfs Cache Pool? I got two Kingston SA400s37480G drives and they don't work (at least on 6.8.2). Everything works fine in XFS, but then I have no redundancy. Same problems as this: I also posted here and will post the solution in both places: https://www.reddit.com/r/unRAID/comments/f4e9wx/working_ssd_for_cache_pool_btrfs/
  14. My bad... I meant 2 x 720GB. But you answered my questions, thanks!
  15. Hello, I've tried to search around but can't find clear information. I currently have 2 x 480GB in my Cache pool, with 480GB available. According to "btrfs fi df /mnt/cache" it's in "RAID1". According to https://carfax.org.uk/btrfs-usage/ , 2 x 480GB in RAID1 gives 480GB of usable space. Now, can I add another 480GB drive? If yes, am I right to think I will have 720GB of available cache? If yes, will there be any performance loss? I mean, 3 x 480GB versus 2 x 360GB (I know they don't exist) for the same amount of space? Thanks a lot! (See the btrfs sketch right after this list.)
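
For post 1, a minimal sketch (assuming the standard Docker CLI is available on the Unraid host and the custom network is named br0) of how the container IPs on br0 could be noted from the command line before switching from macvlan to ipvlan:

    # list containers attached to br0 together with their IPv4 addresses
    docker network inspect br0 --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{println}}{{end}}'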
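
For post 10, a generic illustration (nginx here is just an example image, not one of the containers mentioned) of the expected direction of a Docker port mapping, which is host port first, then container port:

    # publish container port 80 on host port 8080; reversing the two sides breaks access
    docker run -d --name web -p 8080:80 nginx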
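
For post 15, the btrfs RAID1 arithmetic is usable space = total raw capacity / 2, so 3 x 480GB gives roughly 720GB usable. A hedged command-line sketch of adding a third device (the device name /dev/sdX1 is hypothetical, and on Unraid the pool is normally changed through the GUI instead):

    btrfs device add /dev/sdX1 /mnt/cache                            # add the third SSD to the mounted pool
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache   # redistribute data and metadata as RAID1
    btrfs fi usage /mnt/cache                                        # confirm the new usable size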