caplam

Everything posted by caplam

  1. Hello, I upgraded to beta 30 from 6.8.3 when it became available. On 6.8.3 I had the excessive SSD write problem, which killed 2 brand new SSDs in 9 months. I replaced the pool with a single XFS-formatted SSD. Yesterday I decided to set up a new pool with 2 brand new SSDs (WD Blue 500GB). I transferred all data from the cache (appdata, domains and system shares, no other data) to the new pool (around 310GB). That was 15 hours ago. Here's a screenshot of the SMART data. With iotop -ao I can see that each container startup almost instantaneously writes 4GB to loop2; I have around 35 containers running. All are stopped each night for the appdata backup.

        TID  PRIO  USER  DISK READ  DISK WRITE>  SWAPIN   IO      COMMAND
      25403  be/0  root     4.71 M    1263.98 M  0.00 %   1.00 %  [loop2]
      25946  be/4  root     0.00 B     111.59 M  0.00 %   0.09 %  qemu-system-x86_64 -name guest=Her~rol=deny -msg timestamp=on [worker]
      17588  be/4  root     0.00 B      90.50 M  0.00 %   0.07 %  qemu-system-x86_64 -name guest=Her~rol=deny -msg timestamp=on [worker]
       2795  be/4  root     0.00 B      87.25 M  0.00 %   0.00 %  [kworker/u65:7-btrfs-endio-write]

     Above is the result of iotop -ao after 10 minutes (containers already started). Of course the new pool is formatted in btrfs with 1MiB-aligned partitions. I thought the write problem was solved in beta 25. Am I missing something obvious?

     edit: I decided to dig a bit more. I looked at the SSD that was used as the single XFS-formatted cache drive. I set it up around the end of May and used it until yesterday. I had 2 identical SSDs: I used one for cache and kept the other. They were both at around 41728932997 LBAs written (19TB with 512-byte sectors). Today the SSD I used as the single cache drive is at 107379774578 LBAs written (50TB). So I guess over 5 months Unraid has written 31TB. That's around 200GB/day.
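The LBA-to-terabytes figures in the post above can be checked with plain shell arithmetic. A minimal sketch, assuming 512-byte sectors and roughly 153 days (5 months) between the two SMART readings:

```shell
# Convert SMART Total_LBAs_Written readings to TB, assuming 512-byte sectors.
old_lba=41728932997     # reading when the drive was set up (end of May)
new_lba=107379774578    # reading today
bytes_per_lba=512

to_tb() { echo $(( $1 * bytes_per_lba / 1024 / 1024 / 1024 / 1024 )); }

echo "then: $(to_tb $old_lba) TB, now: $(to_tb $new_lba) TB"

# Average daily write rate over ~153 days, in GB/day
delta_gb=$(( (new_lba - old_lba) * bytes_per_lba / 1024 / 1024 / 1024 ))
echo "average: $(( delta_gb / 153 )) GB/day"
```

This reproduces the post's numbers: 19TB then, 50TB now, roughly 200GB/day.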
  2. it's now OK. Deleting the files worked and the syslog is accessible again, so no need to reboot. Perhaps I'll try browsing my Unraid server's files with a macOS machine to see if it's working normally.
  3. thank you. It seems that did the trick. And I don't use a macOS machine anymore. Can I just delete syslog.1, syslog.2 and samba/log.smbd.old? I have not seen any method other than rebooting to clear the logs. It's a bit of a pain in the a..
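One approach that avoids a reboot: rotated copies (*.1, *.2, *.old) are no longer held open by any daemon and can simply be deleted, while the active log is better truncated in place so the logger keeps its open file handle. A sketch on throwaway files (on the server the real paths would be under /var/log):

```shell
# Demo on throwaway files; substitute the real /var/log paths on the server.
tmp=$(mktemp -d)
echo "current entries" > "$tmp/syslog"
echo "rotated entries" > "$tmp/syslog.1"

truncate -s 0 "$tmp/syslog"   # zero the active log, keep the same inode
rm -f "$tmp/syslog.1"         # rotated copies can just be removed

wc -c < "$tmp/syslog"         # prints 0: the file is empty but still exists
rm -rf "$tmp"
```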
  4. Hello, since I moved to 6.9 beta 30 (was on 6.8.3 before) my log is filling up with this message:

     Oct 10 05:18:19 godzilla smbd[30856]: [2020/10/10 05:18:19.035061, 0] ../../source3/smbd/dfree.c:140(sys_disk_free)
     Oct 10 05:18:19 godzilla smbd[30856]: sys_disk_free: VFS disk_free failed. Error was : Not a directory

     Do you have any idea why I get this error? godzilla-diagnostics-20201014-1733.zip
  5. you probably have user shares with Use cache set to Yes. That's normal behaviour: the mover moves data from the cache to the array at night for shares with Use cache set to Yes.
  6. I'm not sure I understand. I read the release notes, and they say the mover method is suitable when using a multi-device btrfs pool. I'm starting from a single-device XFS cache and transferring to a multi-device btrfs pool.
  7. cool, I didn't think of that, as I'd had bad experiences with the mover. But as long as the parity-protected array is not involved, it should be quick enough. I haven't seen screenshots of the beta yet. So based on your answer, I guess you can invoke the mover to move folders from one cache pool to another. 😀
  8. Hello, in my server I have an adapter like the one below: https://www.amazon.fr/gp/product/B071VLZVMX/ref=ppx_yo_dt_b_asin_title_o07_s00?ie=UTF8&psc=1 It has 2 SATA M.2 B-key ports and one M.2 PCIe port. For now I use only the SATA ports. I want to replace the 2 SATA AHCI SSDs (almost dead from intensive writes). As a replacement I will buy one 960GB SATA AHCI SSD. I already have a JetDrive 520 from a 2012 MacBook Air that I want to reuse, but the JetDrive 520 uses an Apple-specific connector (it's still SATA AHCI). So I'm looking for an adapter to plug the JetDrive 520 into the adapter above. Do you know where I can find that?
  9. so you will have to give more details: how many NICs are in your server? Do you use LACP? If so, check the switch config and try with only one NIC plugged in. Where is your DHCP server? You have to look at everything that can have an impact on your network.
  10. open a terminal and run: du -h -d1 /mnt/cache
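A variant of that command sorts the per-directory totals so the biggest space consumer ends up last (this assumes GNU coreutils' `sort -h`, which understands human-readable sizes). Demonstrated here on a throwaway directory; on the server the argument would be /mnt/cache:

```shell
# Per-directory usage, one level deep, sorted smallest to largest.
tmp=$(mktemp -d)
mkdir -p "$tmp/appdata" "$tmp/domains"
head -c 65536 /dev/zero > "$tmp/appdata/blob"   # make appdata the big one

du -h -d1 "$tmp" | sort -h
rm -rf "$tmp"
```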
  11. did you check that you don't have a duplicate IP on your network?
  12. Hello, due to excessive writes on the cache pool in 6.8.3 I had to switch to a 1-disk cache formatted in XFS. Shares on the cache are system, appdata and domains. From what I read, the beta releases seem to be quite stable. I don't like running an unprotected cache despite having backups, so I'm considering moving to the beta. I have around 40 dockers and 6 VMs (without disk passthrough). I use the lsio Nvidia version (I have 2 GPUs and one is used for the Plex and Tdarr dockers). I have SATA HDDs in the main case and an external SAS case (the cache disk is in the SAS case) attached to an LSI 9207-8e controller. I have a single 860 EVO SSD as cache disk. Can I upgrade to the beta, then create a new 2-SSD pool and move the data from the cache to the new pool manually (of course with the array offline)? After that I plan to reuse the single SSD (plus another one) to make another cache pool, then use one pool for appdata and as the array cache, and the second one for VMs.
  13. When I did the migration it crashed too (but not because of the network). I had to change the mapped directory and then restore the config files from the old directory.
  14. I haven't found a working libvirt plugin for Telegraf. This seems odd, as libvirt is used by so many people. I've found PRs and issues about such a plugin, but so far no one has managed to get it working.
  15. I never tried installing Telegraf in my VMs, as I would prefer to show the host resources used by the VM.
  16. Hi, very nice setup; the use of regex is a big improvement. Do you plan to add a panel for VMs (with the same kind of stats as the docker panel)? My setup is similar, but I didn't find a simple way to add VM stats. Mine is also based on gilbn's dashboard (I also added the disk serial as a device tag).
  17. wow, I don't know where I saw this, but I'm using the OnlyOffice repository. Yesterday I had a hard time trying to make this work. OnlyOffice was not available (even locally) until I fixed the reverse proxy problem I had (deleted linuxserver/letsencrypt and reconfigured linuxserver/swag from scratch). Strange, but it's now working.
  18. hi, I'm using this docker for Nextcloud and another container for OnlyOffice. The container I use seems to be unmaintained now (siwat's repository). It still works in a web session, but I'd like to change to a more current version. Do you have any suggestion for an OnlyOffice docker repository? If I could keep my OnlyOffice appdata it would be convenient.
  19. is it a SAS drive? I think SAS drives cannot be spun down.
  20. I had the same problem, and like you the solution was to destroy and recreate the VM with the same vdisk.
  21. I spent my whole day pulling some images to use locally. I'm quite disappointed, as very few work out of the box. I modified live_endpoint in boot.cfg and that part is OK. I installed a Linux Mint VM from an ISO, as the pulled image (squashfs) couldn't boot: it failed with "rootfs not detected" even though the squashfs file was present. My VM installed with LVM automatic partitioning (which is not convenient, as /home is in the root partition), so I needed to shrink lvroot to make an lvhome. I thought that with all the utilities on the menu I would find one to resize my lvroot. I was able to boot GParted, but it can't resize LVs. Kaspersky live booted and loaded files from the server, but stopped displaying "no network connection", which is strange as it had just downloaded files from the server. Rescatux boots but stops at some point complaining about the lack of a screen. SystemRescueCD stopped on "error failed to mount /dev/loop0". I can boot Debian live, but the keyboard is unresponsive: I have to hit the keys 2 or 3 times, which is not very convenient when the locales are not set correctly, and when I try to reconfigure the locale, the boot process shows up again as soon as I try to move the cursor. I'll continue my attempts, but the learning curve will probably be too steep for the time I have.
  22. thanks for your answer. Unfortunately I have a very weak connection, so booting from images stored on GitHub is not an option for me; I have to store locally all the images I want. I guess I will start with a VM to try. When I modify an entry in boot.cfg or in a submenu, do I have to restart the docker?
  23. I was just looking for some documentation about PXE booting, as I'm fed up with USB keys I always lose. Thanks for this nice docker; it looks awesome. For now I know nothing about PXE booting and how to use netboot.xyz. The docker is pretty easy to set up, but now I have to learn how to add ISOs that are not maintained remotely. I see that you can pull images to have them locally accessible, but I don't know the difference between the different options. For example, if I want to install Debian stable, which files do I have to pull: squashfs, vmlinuz or initrd? If I want to install Windows using the provided menu, I guess I have to put a Windows ISO somewhere, but where?
  24. i have 7 folders. I see 6 on the dashboard and 4 on the docker page. Attached is the console output when browsing the docker page.