caplam

Everything posted by caplam

  1. It makes sense. I only ran a 1-pass preclear for testing; it's a brand new disk, and I had 3 other disks in the preclearing process. All drives are Exos (3x 16TB, 1x 8TB). They get really hot when running preclear mounted in the same DAS case. I had the impression the mover was stalled: there was no reading or writing on the array. As soon as I resumed, there was writing only on the parity disk being rebuilt.
  2. So, if the disk has been precleared, it could save time by ending the parity sync early. Right now the parity sync has been paused because of mover activity and I had to resume it manually. And yet the parity disk being rebuilt is the only array disk being accessed.
  3. I am expanding my array by swapping in bigger drives. Before that I precleared the drives to test them. I have two 6TB parity drives being replaced with 16TB drives. I was thinking it would be nice if the parity rebuild could finish once the biggest data disk has been read, provided the disk being rebuilt has been precleared. I don't know if I'm being clear. In my case the older disks are slow: the rebuild has reached 6TB and there are 10TB more to go, but nothing is read from the other disks as they are smaller. I assume the rest of the parity sync is like zeroing the disk, which preclear has already done.
  4. If I remember correctly you can gather a share's data onto one disk. I can't remember whether you can select several disks as the destination.
  5. I read the mentioned topic. Lots of useful stuff there. For now I'm upgrading my array: 4 preclears done, the rebuild of the first parity disk is running, and I'm waiting to upgrade the 2nd parity disk and 2 array disks. As I have more archives than backups, I wonder whether I should go for an LTO streamer and keep 1 or 2 spinners for backups.
  6. You need to be more precise. The first thing is to estimate your storage needs (present and future). After that you can choose your drives according to the number of SATA/SAS ports on your server. For me (in France) the best price/capacity ratio drove me to upgrade my array with 16TB Exos drives.
  7. I think you'll have to reset the AIO installation. You need to follow the setup process exactly, particularly for the volumes, whose setup is a bit unusual (not like other Docker volumes). Pay attention to folder permissions. When I set up mine I had to restart because the volume was not set up correctly.
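     For reference, the unusual part is that the AIO master container expects a named Docker volume (with a specific name) rather than a normal host-path mapping. A minimal sketch of the run command, based on my memory of the Nextcloud AIO README; check the current documentation before using it, as ports and options change between releases:

        docker run -d \
          --name nextcloud-aio-mastercontainer \
          --restart always \
          -p 8080:8080 \
          -v nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
          -v /var/run/docker.sock:/var/run/docker.sock:ro \
          nextcloud/all-in-one:latest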
  8. Look at unBALANCE. It can help you gather your data.
  9. Yes, but can you split this space across 3 or 4 drives?
  10. Today I back up my data with several apps:
     - CA Backup for the flash drive and appdata
     - Borg for Nextcloud AIO
     - luckyBackup for several user shares
     - VM Backup for VMs
     All these backups are written to a user share (on the Unraid array) and then transferred with luckyBackup to an unassigned USB drive. The size of the backups fits on that drive. Now I plan to upgrade the array and will use the current parity drives as archive drives. The goal is to keep a copy of the media files on the archive drives and update them from time to time (2 or 3 times a year; no need to do it more often). Today the media files are not backed up. The problem is that the media files are 3 times the size of the archive drives. How can I back up the media files to the archive drives and update them with new content 3 or 4 months later?
  11. As the name indicates, Unraid is not RAID. What Unraid doesn't like is SATA port multipliers: the HBA has to present each disk directly to Unraid. In the InWin case you have an integrated Broadcom SAS expander on the backplane, which is perfect for Unraid. However, your motherboard has to have an HBA. Depending on the "disks" you want, you need a SAS HBA and/or an NVMe HBA. According to the manual you can use 24 2.5"/3.5" SAS/SATA disks, or 16 2.5"/3.5" SAS/SATA disks plus 8 NVMe disks. So the HBA(s) you need depend on your choice, but there's no problem with the expander.
  12. I can confirm you don't need bifurcation to use this card. As mentioned in my post, it has a "PLX Technology, Inc. PEX 8748", which is a PCIe switch. Edit: to be clear, my Z620 doesn't support bifurcation either.
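     If you want to check whether a card has such an onboard switch (and therefore doesn't need bifurcation), something like this should list it; the exact device name varies by card:

        lspci | grep -i plx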
  13. Bought this one; it works flawlessly in an HP Z620.
     Pros:
     - works with 22110 SSDs
     - works in x16, x8 and x4 (not verified) slots
     Cons:
     - the small fan on the card is a bit noisy
     IOMMU group 29: [10b5:8748] 04:00.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca)
     IOMMU group 30: [10b5:8748] 05:08.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca)
     IOMMU group 31: [10b5:8748] 05:09.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca)
     IOMMU group 32: [10b5:8748] 05:0a.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca)
     IOMMU group 33: [10b5:8748] 05:0b.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca)
     IOMMU group 34: [10b5:8748] 05:10.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca)
     IOMMU group 35: [10b5:8748] 05:11.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca)
     IOMMU group 36: [10b5:8748] 05:12.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca)
     IOMMU group 37: [10b5:8748] 05:13.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca)
     IOMMU group 38: [1c5c:2527] 06:00.0 Non-Volatile memory controller: SK hynix Device 2527
     [N:0:1:1] disk HFS960GD0TEG-6410A__1 /dev/nvme0n1 960GB
     IOMMU group 39: [1c5c:2527] 07:00.0 Non-Volatile memory controller: SK hynix Device 2527
     [N:1:1:1] disk HFS960GD0TEG-6410A__1 /dev/nvme1n1 960GB
     IOMMU group 40: [1c5c:2527] 08:00.0 Non-Volatile memory controller: SK hynix Device 2527
     [N:2:1:1] disk HFS960GD0TEG-6410A__1 /dev/nvme2n1 960GB
     IOMMU group 41: [1c5c:2527] 09:00.0 Non-Volatile memory controller: SK hynix Device 2527
     [N:3:1:1] disk HFS960GD0TEG-6410A__1 /dev/nvme3n1 960GB
  14. I gave it another try with the latest update and it seems to work now. 😀
  15. I'm in the same boat. The plugin is freshly installed and no files are added. I tested several times.
  16. Hello, I set up the container. Everything is working except the folder structure. The container variable folder_structure is set to "album". From what I understood, it should save pictures with the same folder structure as iCloud. The result is that all my pics and videos end up in a single folder called "album". In iCloud I have around 20k files for 62GB. 5% of these files are unclassified; the rest is in albums named after the date and event, and those albums are grouped in folders (one folder per year). Could anybody point me in the right direction?
  17. If the unassigned disk is a USB one, you have to bind mount it into the luckyBackup Docker container as the /destination folder: /destination <-> /mnt/disks/yourdiskid/
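     In an Unraid template that's an extra path mapping (container path /destination, host path /mnt/disks/yourdiskid/); with a plain docker run it's just a -v flag. A minimal sketch, with the image name left as a placeholder since it depends on which luckyBackup container you installed:

        docker run -d --name luckybackup \
          -v /mnt/disks/yourdiskid/:/destination \
          <your-luckybackup-image>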
  18. Obviously, questions about ZFS. I'm a long-time Linux user but still a noob. I used Proxmox and ESXi in the past, and Unraid for the last 4 years, but I still need some guidance. So I would appreciate some sort of FAQ about ZFS:
     - pros and cons of a ZFS pool as the main array and as a cache for the main array
     - how to choose the disk layout
     - how to best use SSDs, and how many cache pools
     - the best way to organize and run backups according to data type (flat file folders, Nextcloud, Docker, VMs, ...)
     I'm waiting for the stable release to reorganize all my data and my backups.
  19. I haven't tried with v2, but if you do, you have to start from scratch because the databases are not compatible. Telegraf with the influxdbv2 output option can't write to an InfluxDB 1.8 database. I also remember that in the UUD project the creator never switched to v2 (probably to keep his historical database). Edit: I don't know whether VMs can be monitored with Prometheus.
  20. I use the linuxserver one and I don't have much to complain about. I seem to remember reading that the official Docker image had problems: too many writes, which could lead to premature SSD wear. That may have been fixed since.
  21. You have the answer in your first post: you configured the influxdbv2 output in Telegraf but installed InfluxDB 1.8. That said, it's really a headache. I had built myself a nice dashboard based on the UUD project but hardly ever looked at it. And one day, most likely after an update, InfluxDB 1.8 went haywire and was saturating my 32 cores. Since then I've stopped the whole stack. I'm thinking of maybe trying the Prometheus-based one, which looks easier to get started with.
  22. I also added a 10Gb SFP+ card to my server yesterday. At the first boot the card was detected but there was no link. After a second reboot I was able to make it eth0 and the link came up. There was just bridging, no bonding.
  23. I guess you have to trust the maintainer if you use a template. The needed host paths have to be exposed in the template. If you're not using a template, you have to know where the app writes so you can map that to the host. If you're a beginner I would advise sticking with templated Docker apps.
  24. I think I've already written this:
     - friendlier VM management (no more XML editing) and a GUI for managing VM snapshots
     - support for user-friendly Docker Compose (Portainer is a fine GUI for managing containers, images and networks)
     - a robust pool for Docker; I've been struggling in 6.8.3 with a btrfs pool and Docker. On that point a ZFS cache pool might be interesting, if the Docker ZFS driver doesn't cause write amplification.
     - and last but not least: a simple way to monitor the Unraid server and its services. It's already possible with the UUD project, but it's quite an effort to set up and maintain. I dropped it when an update to the InfluxDB docker made my server go crazy (InfluxDB was eating my CPUs).
  25. When buying HDDs I'm driven by the price/capacity ratio. When capacities were around 160GB-500GB it was Samsung. With the 1.5TB-2TB generation it was Seagate (ST1500DM001 and ST2000DM001). With the 3TB and 4TB generation I started with WD Red, then Seagate IronWolf. With 6TB I mainly used shucked Seagate Expansion disks. I then stopped upgrading capacity, as I found the drives were big enough for my needs and going bigger was a huge step in terms of money. Now I plan a significant upgrade. Right now the best price/capacity ratio is on 16TB drives (Exos X16 or Toshiba MG08, sitting around 16€/TB). Here in Europe storage prices are higher; in the U.S. you can get huge deals on enterprise-grade HDDs or SSDs. I've already lost hours rebuilding data, so Unraid or RAID, I always go for 2 parity drives and an array limited to 6 or 8 drives.