segator

Everything posted by segator

  1. It seems we cannot run Kubernetes natively on Unraid if we don't enable Docker. I would like to separate Docker containers from Kubernetes containers. If I run k3s, it automatically configures a containerd instance, but I get this error: ERRO[2021-05-29T20:59:52.102559272+02:00] RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-86cbb8457f-cszbj,Uid:89b03757-5da6-4d20-a140-9544f5c940db,Namespace:kube-system,Attempt:0,} failed, error error="failed to create containerd task: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:495: container init caused: rootfs_linux.go:118: jailing process inside rootfs caused: pivot_root invalid argument: unknown" It seems something is missing in the Unraid OS; maybe someone can help me discover what is missing.
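     A possible workaround, as a sketch only: the pivot_root failure usually points at the container rootfs living on Unraid's tmpfs/rootfs root, so keeping the k3s state (and therefore the container bundles) on a real filesystem may help. The paths below are assumptions, not a confirmed fix:

       # Assumption: /mnt/user/appdata sits on a real (non-rootfs) filesystem
       mkdir -p /mnt/user/appdata/k3s
       # Keep k3s and its embedded containerd state off the tmpfs root;
       # the native snapshotter avoids overlayfs-on-unsupported-fs issues
       k3s server --data-dir /mnt/user/appdata/k3s --snapshotter native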
  2. I see, seems the kernel is built!! But I'm not sure yet how I can enable modules by hand. I need to enable ip_set_hash_* and ip_set_bitmap_*, ip_set, ip_vs, ip_vs_rr, and I think I also need everything regarding ip_conntrack. How can I enable them if I cannot execute make menuconfig? Thanks for your help and amazing work.
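     For reference, a sketch of the Kconfig options those modules correspond to, assuming the build container lets you append to the generated .config before compiling (that injection point is an assumption; the option names are the standard upstream ones):

       # Append the Kubernetes networking bits to the kernel config, then
       # let kbuild resolve dependencies without an interactive menu
       printf '%s\n' \
         CONFIG_IP_SET=m \
         CONFIG_IP_SET_HASH_IP=m \
         CONFIG_IP_SET_HASH_NET=m \
         CONFIG_IP_SET_BITMAP_IP=m \
         CONFIG_IP_VS=m \
         CONFIG_IP_VS_RR=m \
         CONFIG_NF_CONNTRACK=m >> .config
       make olddefconfig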
  3. Hi, I'm trying to use this tool to build a kernel with some networking modules enabled, but the Docker image tag 6.8.3 doesn't exist anymore, I think, and if I use latest then: ---One or more Stock Unraid v6.9.0 files not found, downloading...--- ---Download of Stock Unraid v6.9.0 failed, putting container into sleep mode!--- even though my Unraid is at 6.8.3. Also, my other question: how do I execute the menuconfig to choose the modules? Thanks!
  4. The guide explains how to use a real pendrive; I prefer to use a simple img file instead of another USB.
  5. I have 2 bare-metal Unraid boxes with their licenses (the second one is a fusion of a NAS + desktop PC). It works well, but I lose some performance for gaming and I'm tired of it, so I want to move that Unraid onto the first box (already running Unraid, but as a VM) to isolate versions and permissions. How can I move my license to the VM without even using a pendrive? I suppose it is fine somehow to dump my pendrive to an img file and then load the img file into the VM? But will the license then fail? What should I do?
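     A minimal sketch of the dump-and-attach idea, assuming a QEMU/KVM VM and that /dev/sdX is the licensed pendrive (device names and paths are placeholders; whether the license check accepts an image instead of the physical flash drive is exactly the open question here):

       # Dump the pendrive to a raw image
       dd if=/dev/sdX of=/mnt/user/isos/unraid-flash.img bs=1M status=progress
       # Attach the image to the VM as a removable USB mass-storage device
       qemu-system-x86_64 ... -usb \
         -drive file=/mnt/user/isos/unraid-flash.img,format=raw,if=none,id=uflash \
         -device usb-storage,drive=uflash,removable=on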
  6. @steve1977 I have some questions for you: a disk rebuild or a parity check, when all disks are active, is extremely slow, I suppose, right? Does SMART per disk work well? Does spinning the disks up/down work?
  7. Wow, a lot of good news since the last time I checked this post: ZFS 2.0, official support... Thanks @steini84 for your awesome help & contribution.
  8. Will this work? https://www.amazon.com/ORICO-External-Docking-Duplicator-Function/dp/B07MQCDVJ2/ref=sr_1_1_sspa?dchild=1&keywords=orico+5+bay&qid=1604833876&sr=8-1-spons&psc=1&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUEyWFJXVVpLRDlSUTRUJmVuY3J5cHRlZElkPUEwMzY1MDIzMlExSElGUjZFRTRNSiZlbmNyeXB0ZWRBZElkPUEwMjAzOTU3MUpFOURDVDZSQldETiZ3aWRnZXROYW1lPXNwX2F0ZiZhY3Rpb249Y2xpY2tSZWRpcmVjdCZkb05vdExvZ0NsaWNrPXRydWU=
  9. Just thinking of extending my storage with the Orico 5-bay enclosure as well. @steve1977, were you able to get this running? Or any other USB enclosure?
  10. Well, the idea is to have it there 24/7 to expand my array. Some of them will be disks in the Unraid array (so I suppose no problem, as those disks work independently), but I also plan to create another ZFS raid there (3 disks).
  11. Oh... sorry, what a newbie I am... yes, it says SMART OK, so I suppose that if any issues are detected in the future I will be notified, just like for any disk in the array, right? BTW, has anyone tried ZFS over USB disks? Is that a crazy idea? I am out of space on my rack server and I'm considering buying a 4-slot USB enclosure. Any other ideas if not?
  12. Interesting, not on mine (those disks are a ZFS pool). Do I need to do something else?
  13. Question: how do you guys monitor disk SMART when using ZFS? The Unassigned Devices plugin does not monitor disks that are not mounted, right?
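     One way to check it by hand, as a sketch (device names are placeholders; list the pool members first, then query them directly):

       # List the pool members
       zpool status -v
       # Query SMART for one of them (sdX is a placeholder)
       smartctl -H /dev/sdX   # quick health verdict
       smartctl -a /dev/sdX   # full attributes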
  14. Memory issue fixed: increased the record size (blocksize) of the dataset to 1M, ashift=12, and disabled dedup and compression on volumes that don't need them (like multimedia vols); 15 GB less usage. @Marshalleq, which error do you have? Running the latest beta with a gaming VM and ZFS RC2 (NAS + desktop PC all in one), all working fine after my memory fixes. In your logs I saw you have disabled xattr?
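     For reference, a sketch of the corresponding commands (pool/dataset names are placeholders; recordsize only affects newly written data, and ashift can only be set at pool creation time):

       # Larger records and no dedup/compression for already-compressed media
       zfs set recordsize=1M tank/media
       zfs set dedup=off tank/media
       zfs set compression=off tank/media
       # ashift is fixed when the pool is created, e.g.:
       zpool create -o ashift=12 tank mirror sdX sdY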
  15. Oh, that's a good point. I also have a dataset with some media files, so I suppose I will need to create a new one with dedup disabled and move the data there, right?
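     That matches how dedup works: setting dedup=off only affects newly written blocks, so existing data has to be rewritten to drop out of the dedup table. A sketch with placeholder names:

       # New dataset without dedup, then rewrite the data into it
       zfs create -o dedup=off tank/media-nodedup
       rsync -a /mnt/tank/media/ /mnt/tank/media-nodedup/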
  16. Yes, I remember reading about 1 GB of RAM per TB of data; in my case I have 10.4 TB of data, so that's 10.4 GB of RAM. So if I sum my VM RAM + ARC + dedup, 24 + 12 + 11, that means totally out of RAM... Nevertheless, I have now reduced it to 20 + 8 + 11 and I'm still at 0 GB of free RAM.
  17. In my case it has a lot of benefits because of the type of data I store on my volume: I build code and archive all the builds, so the deduplication ratio is very high, as the difference between builds is a couple of KB. Anyway, as far as I understood, theoretically when Linux needs memory ZFS frees some of the ARC cache, but it seems it's not doing that, could that be? Also, do you know how these parameters affect things when using ZFS on Unraid? The recommended values in the Tips and Tweaks Unraid plugin are: vm.dirty_background_ratio = 2%, vm.dirty_ratio = 3%.
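     For reference, a sketch of how those writeback knobs can be inspected and applied (the 2%/3% values are the ones quoted from the plugin, not a recommendation):

       # Show the current values
       sysctl vm.dirty_background_ratio vm.dirty_ratio
       # Apply the values suggested by the Tips and Tweaks plugin
       sysctl -w vm.dirty_background_ratio=2
       sysctl -w vm.dirty_ratio=3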
  18. How is this possible? I have 24 GB for the VM, 10 GB for the ARC cache, and I'm running Nextcloud with Redis and MySQL plus some rubbish dockers. How can I see the real RAM usage of ZFS (ARC + metadata cache + dedup...)? I have 10.4 TB on my ZFS vol, BTW.
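     A sketch of where that information is exposed (the pool name is a placeholder; arc_summary ships with the OpenZFS userland, though its exact name can vary by version):

       # ARC, metadata and dedup statistics
       arc_summary
       # Raw ARC counters: current size ("size") and target ("c")
       grep -E '^(size|c|arc_meta_used) ' /proc/spl/kstat/zfs/arcstats
       # Dedup table histogram for a pool
       zpool status -D tank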
  19.         total  used  free  shared  buff/cache  available
      Mem:       47    42     2       1           2          2
      Swap:       0     0     0
  20.     time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  size    c  avail
      11:24:21     0     0      0     0    0     0    0     0    0   12G  12G   3.3G
  21. Hey, I've been using ZFS on Unraid for a couple of weeks with a gaming VM as my primary desktop PC (NAS + desktop PC all in one). It works fine, but sometimes Unraid decides to kill my VM because of "out of memory". I assigned 16 GB of RAM to the VM and the host has 64 GB of RAM. I think ZFS is not freeing the ARC fast enough when other containers reclaim memory, and then the kernel decides to kill my VM. I can fix it using hugepages, but I don't like that, because then it's memory that ZFS cannot use when the VM is shut down (which in the end is 90% of the time). I tried to limit the ZFS ARC with echo 12884901888 >> /sys/module/zfs/parameters/zfs_arc_max but it's the same.
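     To make that ARC ceiling persist across reboots, one sketch is to set the module parameters from Unraid's go file at boot (the 12 GiB value is the same one quoted above; the extra headroom parameter and its 4 GiB value are assumptions, not a confirmed fix for the OOM kills):

       # /boot/config/go - runs at boot on Unraid
       echo 12884901888 > /sys/module/zfs/parameters/zfs_arc_max
       # Optionally ask the ARC to keep some memory free for the rest of the system
       echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_sys_free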
  22. Nope, I still have the same issue, but the machine is more or less usable if there is no extreme load on the virtual ethernet device.
  23. Running the stable beta29 with a gaming Windows VM as my main desktop PC. The host-passthrough issue with Ryzen is fixed, but I'm still using host-model anyway, as I notice more performance (or at least that's what I see with Cinebench and AIDA64).
  24. Oops, I forgot to mention this error: d: do_drive_cmd: disk7: ATA_OP e0 ioctl error: -5. It appears whenever Unraid tries to spin down the SAS disks; that's why I'm saying it's not working, as it stays in a loop forever until the log fills up in a couple of weeks.