Everything posted by caplam

  1. I also can't install it. I tried removing another plugin (Fix Common Problems) and that worked, and I was also able to add a plugin (ControlR).
  2. Now the Add Folder button has disappeared. It seems I cannot completely delete the plugin. I now get an error:
        plugin: removing: docker.folder.plg
        plugin: file doesn't exist or xml parse error
     And when I open the plugins page I still see it listed (even though it has been deleted):
  3. Great plugin. Unfortunately I've made quite a mess. I created several folders but they were not showing up, and the dockers in these folders were not showing either. When creating a folder, a deleted docker shows up. I decided to start over from scratch by deleting the plugin, but the Add Folder button is still present. After reinstalling the plugin I get two warnings when adding a folder:
        Warning: file_get_contents(/boot/config/plugins/docker.folder/folders.json): failed to open stream: No such file or directory in /usr/local/emhttp/plugins/docker.folder/include/import-export.php on line 2
        Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/docker.folder/include/import-export.php on line 10
     edit: I deleted the plugin's files and folders in /boot/config/plugins and in /usr/local/emhttp/plugins (see the cleanup sketch below). No more Add Folder button, but when I reinstall the plugin I get the following error:
        plugin: installing: https://raw.githubusercontent.com/GuildDarts/unraid-plugin-docker.folder/master/plugins/docker.folder.plg
        plugin: downloading https://raw.githubusercontent.com/GuildDarts/unraid-plugin-docker.folder/master/plugins/docker.folder.plg
        plugin: downloading: https://raw.githubusercontent.com/GuildDarts/unraid-plugin-docker.folder/master/plugins/docker.folder.plg ... done
        plugin: file doesn't exist or xml parse error
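     For reference, the manual cleanup amounts to something like this (paths taken from the warnings above; double-check before removing anything):
        rm -f  /boot/config/plugins/docker.folder.plg       # the plugin file itself
        rm -rf /boot/config/plugins/docker.folder           # stored settings (folders.json lives here)
        rm -rf /usr/local/emhttp/plugins/docker.folder      # the installed plugin pages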
  4. Answer to myself: I rebooted with an EFI shell on a USB key. The EFI shell has to be named bootx64.efi and placed in /EFI/boot on a FAT32 USB key. In the root directory you must place the firmware .bin file (named according to your card), sas2flash.efi and mptsas2.rom. As my card was on P20 firmware, I used the P20 EFI installer. Be sure to note the SAS address of the card before flashing. In my case the difficulty was finding a suitable EFI shell file for my server (an HP Z620); I also had no idea how to make an EFI-bootable USB key. Now it's done and I'm running the P16 firmware:
        sas2flash -list
        LSI Corporation SAS2 Flash Utility
        Version 20.00.00.00 (2014.09.18)
        Copyright (c) 2008-2014 LSI Corporation. All rights reserved

        Adapter Selected is a LSI SAS: SAS2308_2(D1)
        Controller Number            : 0
        Controller                   : SAS2308_2(D1)
        PCI Address                  : 00:03:00:00
        SAS Address                  : 500605b-0-098c-fb80
        NVDATA Version (Default)     : 10.00.00.04
        NVDATA Version (Persistent)  : 10.00.00.04
        Firmware Product ID          : 0x2214 (IT)
        Firmware Version             : 16.00.00.00
        NVDATA Vendor                : LSI
        NVDATA Product ID            : SAS9207-8e
        BIOS Version                 : 07.31.00.00
        UEFI BSD Version             : N/A
        FCODE Version                : N/A
        Board Name                   : SAS9207-8e
        Board Assembly               : H3-25427-02H
        Board Tracer Number          : SV43732784
        Finished Processing Commands Successfully.
        Exiting SAS2Flash.
     I can now trim my 860 EVO unassigned SSD:
        fstrim -v /mnt/disks/Samsung_SSD_860_EVO_500GB_xxxxxxxxxxxxxx/
        /mnt/disks/Samsung_SSD_860_EVO_500GB_xxxxxxxxxxxxxx/: 219.4 GiB (235542450176 bytes) trimmed
     Thank you @johnnie.black for the P16 firmware tip.
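     For anyone landing here later, the flashing itself boils down to something like the following in the EFI shell (not a definitive guide: the firmware file name is whatever ships in Broadcom's P16 package for your card, and you need your own SAS address):
        sas2flash.efi -listall                          # note the controller number and SAS address first
        sas2flash.efi -o -e 6                           # erase the flash region (do not power off after this step)
        sas2flash.efi -o -f 9207-8e.bin -b mptsas2.rom  # write the P16 firmware and BIOS
        sas2flash.efi -o -sasadd 500605bxxxxxxxxx       # restore the SAS address you noted
        sas2flash.efi -list                             # Firmware Version should now read 16.00.00.00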
  5. An OVF is an XML file; it describes the VM. I don't think it's directly compatible with the XML used in Unraid, but you may want to try copying and pasting the XML into the VM definition (advanced view). For qcow2 you have to define the disk device with the qcow2 type. I think (but I'm not sure) you have to do this in the XML:
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='writeback'/>
          <source file='/mnt/cache/domains/Windows8/vdisk1.qcow2' index='3'/>
          <backingStore/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
          <alias name='virtio-disk2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
        </disk>
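     If the disks in the OVF package are not already qcow2 (they are often VMDK), I think they can be converted with qemu-img before pointing the XML at them (file names below are just examples):
        qemu-img convert -p -f vmdk -O qcow2 appliance-disk1.vmdk /mnt/cache/domains/Windows8/vdisk1.qcow2
        qemu-img info /mnt/cache/domains/Windows8/vdisk1.qcow2    # check that the reported format is qcow2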
  6. So it's the same as a one-pass preclear. They really did use the term "low level format".
  7. I have an SSD and I want to test the Western Digital Dashboard app, which exists only on Windows. I passed the SSD through to a Windows VM:
        <disk type='block' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source dev='/dev/disk/by-id/ata-WDC_WDS500G2B0B-00YS70_XXXXXXXXXX' index='2'/>
          <backingStore/>
          <target dev='hdd' bus='virtio'/>
          <alias name='virtio-disk3'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
        </disk>
     Windows sees the disk, but not as a Western Digital SSD: it shows up as a Red Hat VirtIO SCSI disk device, so the Western Digital Dashboard doesn't detect any SSD. Is there a way to have the SSD (or any disk) detected as if it were physically connected to the Windows VM?
  8. Two of my SSDs show huge numbers in their SMART attributes (host writes, NAND writes), and the Reported_Uncorrect attribute keeps growing. I was told by Western Digital to do a low-level format. I seriously doubt the necessity of that procedure, as I think my SSDs are near their end of life. However, I was wondering whether there is any difference between a low-level format and a preclear for an SSD.
  9. Just a small bump for advice. My HBA is a 9207-8e:
        sas2flash -list
        LSI Corporation SAS2 Flash Utility
        Version 20.00.00.00 (2014.09.18)
        Copyright (c) 2008-2014 LSI Corporation. All rights reserved

        Adapter Selected is a LSI SAS: SAS2308_2(D1)
        Controller Number            : 0
        Controller                   : SAS2308_2(D1)
        PCI Address                  : 00:03:00:00
        SAS Address                  : 500605b-0-098c-fb80
        NVDATA Version (Default)     : 14.01.00.06
        NVDATA Version (Persistent)  : 14.01.00.06
        Firmware Product ID          : 0x2214 (IT)
        Firmware Version             : 20.00.07.00
        NVDATA Vendor                : LSI
        NVDATA Product ID            : SAS9207-8e
        BIOS Version                 : 07.39.02.00
        UEFI BSD Version             : 07.27.01.01
        FCODE Version                : N/A
        Board Name                   : SAS9207-8e
        Board Assembly               : H3-25427-02H
        Board Tracer Number          : SV43732784
        Finished Processing Commands Successfully.
        Exiting SAS2Flash.
     I downloaded the packages from Broadcom. Should I downgrade the BIOS and/or UEFI as well? What is the best way to downgrade? Could it be done from the Unraid CLI with the array stopped?
     edit: I forgot to mention that I want to downgrade to P16 in order to use my 860 EVO SSDs.
  10. For what it's worth, I'm concerned about the high writes. In 6 months, 500 TB have been written to my SSDs; they should be dead (they're rated for 300 TBW). My cache was a btrfs pool of two WD Blue M.2 SATA 500 GB drives. I still use one of them as an xfs cache and the write rate has dropped drastically. I think I'll go for a single 970 Pro, as I may have a good deal on a 1 TB drive, but I'll wait before setting up a new btrfs pool. To answer the initial question, I don't see the point of a mechanical pool for cache; the cache is interesting mainly for its speed. Perhaps we'll have good surprises in future releases.
  11. Hello, I'd like some advice on a storage controller. Here is my situation. I have an HP Z620 with 6 SATA HDDs on 3 Gb/s ports. I also have 2 M.2 SATA SSDs on a PCIe bracket, using the only two 6 Gb/s ports of the motherboard. I have an LSI 9207-8e HBA (P20 firmware):
        sas2flash -list
        LSI Corporation SAS2 Flash Utility
        Version 20.00.00.00 (2014.09.18)
        Copyright (c) 2008-2014 LSI Corporation. All rights reserved

        Adapter Selected is a LSI SAS: SAS2308_2(D1)
        Controller Number            : 0
        Controller                   : SAS2308_2(D1)
        PCI Address                  : 00:03:00:00
        SAS Address                  : 500605b-0-098c-fb80
        NVDATA Version (Default)     : 14.01.00.06
        NVDATA Version (Persistent)  : 14.01.00.06
        Firmware Product ID          : 0x2214 (IT)
        Firmware Version             : 20.00.07.00
        NVDATA Vendor                : LSI
        NVDATA Product ID            : SAS9207-8e
        BIOS Version                 : 07.39.02.00
        UEFI BSD Version             : 07.27.01.01
        FCODE Version                : N/A
        Board Name                   : SAS9207-8e
        Board Assembly               : H3-25427-02H
        Board Tracer Number          : SV43732784
        Finished Processing Commands Successfully.
        Exiting SAS2Flash.
     Hooked to the LSI I have a JBOD DAS with 12G capability (no expander). The 2 M.2 SATA SSDs are pretty much dead, due to what is probably a bug involving the docker btrfs driver, the btrfs pool and Unraid (it still has to be investigated), so I urgently need to replace them. For now I use a single-drive cache formatted in xfs on one of the 2 M.2 SSDs. I have 2 Samsung 860 EVO SATA drives which are in good shape (20 TB written), but I have no room inside the HP for them. I don't absolutely need a 12G link for my JBOD. I'd like to have a cache pool, but for now that's not really an option since btrfs is mandatory for pools. From what I've read, the LSI 9207 (and all LSI SAS2 HBAs) does not support TRIM with Samsung SSDs (and others?) on the P20 firmware. Can I downgrade the LSI 9207 to the P16 firmware, which seems to be TRIM-compatible? How can I do that from within Unraid or the BIOS? Which HBAs are compatible with Unraid and TRIM? Preferably SAS3 if I have to change. I'm also looking for a durable SSD and I'm waiting for an answer on a good deal on a 970 Pro 1TB. If I go for a SAS3 controller, I could use SAS SSDs with great endurance.
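     By the way, a quick way to check whether a given SSD behind the HBA currently exposes TRIM (device name and mount point below are just examples):
        lsblk --discard /dev/sdX          # non-zero DISC-GRAN / DISC-MAX means discard is advertised
        fstrim -v /mnt/disks/your_ssd/    # fails with an error if the discard operation is not supported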
  12. That's what I did. The mover took 3 hours to move appdata/plex (13 GB).
  13. Back on track. To make this short: I changed my cache from btrfs to xfs. In one hour, loop2 has written 1400 MB, so it looks good. The downside is that all my VMs and dockers are now on a cache without redundancy. I didn't have to redownload the dockers; I simply moved the cache content to an unassigned SSD, made the change on the cache, and moved the content back. You can't count on the mover, as it's painfully slow: it moved only the Plex appdata folder (13 GB) in 3 hours. Now I'll wait for a fix, since I prefer a cache pool over a single drive, and I have to find 2 new SSDs.
  14. I unassigned parity 2, restarted the array and invoked the mover and... the speed is the same, between 2 and 5 MB/s. There are no other writes except to disk2 and parity 1. It should be quicker.
  15. OK. I can't, because the mover is running. Can we stop it? edit: as simple as: mover stop
  16. Yes, I have read errors on it. Parity 1 had them too and I had it replaced. I have no other disk to replace parity 2 with. Can I unassign it and go on with only 1 parity drive? edit: I'm running a SMART test; last week's came back OK.
  17. Hi, it's not a good period for my Unraid server. Having big trouble with write amplification on my cache pool, I decided to move from a btrfs cache pool to a single xfs cache disk until there is a fix. I shut down the docker and VM services and I'm moving the cache shares to the array with the mover, but it only writes at 1.5 MB/s. All cache shares (except an empty one) have been set to cache: yes, with included disk: disk2. For a few days now I've had so many troubles with this server that I'm considering moving to another OS. How can I stop the mover and transfer the shares from cache to array manually? I also have a big question: from what I understand, docker.img is a btrfs filesystem handling CoW, and when loop2 is mounted we can see the /docker/btrfs/subvolumes tree. Is this handled correctly on an xfs cache? godzilla-diagnostics-20200531-1213.zip
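     For reference, the manual route would look roughly like this (docker and VM services stopped; share and disk names below are just examples, so double-check paths before deleting anything):
        mover stop                                                      # stop the running mover
        rsync -avh --progress /mnt/cache/appdata/ /mnt/disk2/appdata/   # copy a cache share to an array disk
        du -sh /mnt/cache/appdata /mnt/disk2/appdata                    # compare sizes before removing the source
        rm -rf /mnt/cache/appdata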
  18. Hi, I have a strange issue with some VMs. Due to the high writes on cache killing my SSDs, I decided to move the VMs to an unassigned drive for now. The VMs are stopped and I copied the vdisks to the new location. libvirt.img had been deleted, so I had to re-import the VM settings. For some VMs I imported the XML, pointed it to the new vdisk location and fired up the VM with no problem. For other VMs (Pi-hole on Debian, and Windows 8), if the vdisk is on the unassigned drive it's not detected; if I move it back to cache it's detected and the VM boots normally. What can be the problem here? edit: strange, but solved by removing and recreating the VM.
  19. I tried to copy /var/lib/docker but it failed (out of space). There must be some volumes mounted under /var/lib/docker/btrfs, because the btrfs directory reports a size of 315 GB while docker.img is only 60 GB with 50 GB used.
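     That is probably because every image layer is a btrfs subvolume and a plain copy counts shared data once per layer (just my guess); the docker daemon itself reports the real usage:
        docker system df       # images / containers / volumes with reclaimable space
        docker system df -v    # per-image breakdown (shared vs unique size)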
  20. I've dropped the use of docker compose. The 3 dockers I recreated were recreated with the dockerMan GUI, and still only 2 of them appear in the GUI, even when they are started. The docker template system in Unraid is very convenient. In my case there is a big downside: I have a poor connection, so redownloading 60+ GB of images will take me around 2 days if I shut down Netflix for the kids. 🥵 I wish I could extract the images from docker.img.
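     In theory the images could be saved from the old docker.img and re-imported instead of redownloaded, as long as the docker service still starts against it (image name below is just an example; I haven't tried it with the workaround in place):
        docker save -o /mnt/user/backup/plex.tar linuxserver/plex     # export an image to a tar on the array
        docker load -i /mnt/user/backup/plex.tar                      # re-import it after recreating docker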
  21. I think the problems with my VM were related to the qcow2 image format. I converted it to a raw img (see the sketch below) and now the KB written to disk are coherent between inside the VM and Unraid; cache writes seem to stabilize around 800 KB/s for that VM. edit: it could be related to the btrfs driver. I downgraded to 6.7.2 and the problem is still there for that VM. I think now I have to upgrade to 6.8.3 and apply the workaround from @S1dney. Do I have to delete docker.img or move it elsewhere? Mine is 80 GB, so it takes up a lot of space. edit2: I applied the workaround on 6.7.2. I have to redownload all the docker images (with a 5 Mb/s connection it's a pain in the ass). I've redownloaded 3 dockers but only 2 appear in the docker GUI, while Portainer sees them all.
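     For reference, the qcow2-to-raw conversion is done with the VM shut down (file names below are just examples):
        qemu-img convert -p -f qcow2 -O raw vdisk1.qcow2 vdisk1.img   # -p shows progress
        qemu-img info vdisk1.img                                      # should report "file format: raw"
     Afterwards the disk's <driver> type in the VM XML has to be changed from qcow2 to raw and the <source file> pointed at the new image.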
  22. I've just found a problem with a VM. It's a Jeedom VM (home automation): Debian with Apache, PHP and MySQL. It has a single vdisk and it's on the cache. I ran iotop in Unraid and in the VM, and the results were extremely different: over 10-15 minutes I saw hundreds of MB written to disk by the qemu process in Unraid, while during the same time I saw only a few MB written to disk by MySQL inside the VM. The result on the cache disk was around 15 MB/s, which is approximately half the throughput I had for months. Speaking of the table I posted above: the figures come from the diagnostics files. When the interval is short (1 or 2 days) the accuracy is not very good, as I have not taken into account the time of day at which the diagnostics file was downloaded.
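     For anyone wanting to reproduce the comparison, something like this works (-a accumulates totals, -o shows only processes actually doing I/O, -P groups by process; inside the VM iotop may need installing first):
        iotop -aoP                        # on the Unraid host
        apt install iotop && iotop -aoP   # inside the Debian VM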
  23. Thank you for your answer. Right now I have downgraded to 6.7.2, but it doesn't solve all my problems.
  24. I'm downgrading to 6.7.2; I hope I'm not making a mistake.
  25. I think I made a mistake, as my previous post was deleted. Anyway, I was writing that I'm concerned too. I'm trying to find a workaround, as Unraid is starting to throw alerts for both my SSDs and the 187 Reported_Uncorrect attribute keeps growing. I'm really pissed off with this situation: I was planning to replace my ProCurve switch with a UniFi one, but now I have to buy SSDs as if they were ink cartridges for my printer. 👹 The docker service is stopped, I have only 2 VMs running, and I still see 6 MB/s of writes on the SSDs. As Unraid (or, more probably, I) doesn't always do things right, I take diagnostics from time to time, so I searched the history for the starting point of the problem. Attached is a spreadsheet with the SMART data of one of the SSDs. Seeing this, it seems pretty obvious that things started going crazy with 6.8.0.
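     The same numbers can be pulled live with smartctl if anyone wants to build a similar spreadsheet (device name below is just an example; attribute names vary by vendor):
        smartctl -A /dev/sdX                                       # dump all SMART attributes
        smartctl -A /dev/sdX | grep -Ei 'writes|nand|uncorrect'    # just the write counters and 187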