Everything posted by testdasi

  1. Theoretically it should work. How long ago did you set up the CNAME? It can take up to 48 hours to get updated and propagated. Try using Google's DNS server 8.8.8.8 since it's theoretically the fastest to be updated with Google's own domain settings. Having a CNAME on your own domain point to DuckDNS is probably not a good idea, to be honest. Can you just use Google's own DDNS instead (assuming you don't have a static IP)? In the DNS page of the Google Domains website, look for Synthetic records and add a Dynamic DNS entry, which will generate a subdomain (e.g. subdomain.example.com) plus a username and password. Then use those to configure your router's DDNS settings. Then point the CNAME to the DDNS subdomain, e.g. server -> subdomain.example.com. With a static IP, obviously use an A record instead.
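If you want to check whether the CNAME has propagated yet, you can query Google's resolver directly (hostnames here are placeholders for your own records):

    dig @8.8.8.8 server.example.com CNAME +short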
  2. A few things come to mind: (1) create a new docker called nextcloud-fast with the same config but bridged to the fast network, and use it when you need fast transfers to Nextcloud (use the fast network IP to access it); or (2) since the underlying storage of your Nextcloud is still the same Unraid server, just use your workstation to transfer files over to the same storage location, independent of Nextcloud. If I were you, I would use (2).
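If you go with (2) and copy files in behind Nextcloud's back, remember that Nextcloud won't list them until its file index is rescanned. A rough sketch, assuming a linuxserver-style container simply named nextcloud (container name, user and occ path will vary with your image):

    # rescan all users' files after copying directly to the data share
    docker exec -u abc nextcloud php /config/www/nextcloud/occ files:scan --all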
  3. Would the proposal below work better as a middle ground? Have key generation based on a user-selectable "nominated device", which can be either a USB stick (use the GUID; default selection; the easy no-effort option for the majority of users) or an HDD / SSD (use model + serial number). This would require minimal additional work (a few GUI boxes plus an additional function to generate a pseudo GUID based on HDD / SSD model + serial), no change to the existing licensing process, key replacement etc., no change to piracy risk (I believe the model + serial number combination is unique to a storage device), and better reliability for the users who require it.
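For illustration only, a pseudo GUID for an HDD / SSD could be as simple as hashing the model + serial the OS already reports (device path hypothetical):

    # stable pseudo GUID derived from model + serial number
    lsblk -dno MODEL,SERIAL /dev/sdb | sha256sum | cut -c1-32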
  4. GTX 1650 should work. You should attach diagnostics. Tools -> Diagnostics -> attach full zip file.
  5. In the current beta, the VFIO-PCI.CFG plugin has been integrated into Unraid. So instead of binding using the usual vfio-pci.ids method in syslinux or editing the VFIO-PCI.CFG file (manually or through the plugin), you can now do it in the native Unraid GUI via Tools -> System Devices. You just tick the boxes next to the devices you want to bind for VM pass-through, apply and reboot. What the passage means is that in addition to the usual devices you would need to bind (e.g. USB controller, NVMe SSD etc.), you should also bind the graphics card that you intend to pass through to the VM. That has not been required in the past and is not required until the Nvidia / AMD driver is baked in. Better to do it now rather than run into "my VM stops working" in the future. Right now though, there is no implication since there's no Nvidia / AMD driver included (yet). Side note: if the syslinux method has been working for you then you don't really need to use the new GUI method. Just take note to add the graphics card device IDs to syslinux as well.
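For anyone sticking with syslinux, it's just the usual kernel parameter on the append line of syslinux.cfg; the vendor:device IDs below are made-up examples, so use the ones Tools -> System Devices shows for your GPU and its audio function:

    append vfio-pci.ids=10de:1c82,10de:0fb9 initrd=/bzroot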
  6. Using a custom BIOS for Windows bare metal is already an iffy affair. Adding the VM variable on top of it is just asking for trouble, to be honest.
  7. What do you guys use for a quick GUI-based check on pool information e.g. available space, snapshot sizes etc? I can see stuff in the command line but wondering if there's something like the Main page of Unraid but for ZFS.
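In the meantime, the command-line stuff I mean is along these lines (pool name hypothetical):

    zpool list tank                # capacity, free space, health
    zfs list -o space -r tank      # space used by datasets, snapshots, children
    zfs list -t snapshot -r tank   # individual snapshot sizes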
  8. @steini84: if I don't need to send backups remotely, there wouldn't be any benefit to using Sanoid over Znapzend, would there? I like the config file of Sanoid, but the part about needing to run Sanoid every minute seems excessive to me. If none of my schedules is more frequent than hourly, can I just cron it every 30 minutes instead?
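For context, the every-30-minutes idea would just be a cron entry like this (the sanoid path may differ on your install; --cron takes and prunes snapshots per the config):

    */30 * * * * /usr/local/sbin/sanoid --cron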
  9. A few ways to resolve that. A VM can be set to auto-start. Failing that, you can use the User Scripts plugin to keep trying to start the VM after some delay. Failing that, you can boot Unraid in GUI mode, but you need a graphics card dedicated to Unraid; otherwise, once you start the VM, the Unraid display will disappear and never come back without a reboot. Failing that, you can access the GUI using your phone. You still, however, need the GUI to do the advanced configuration required initially for the VM, so you probably still need another device on the network to access the GUI, preferably with a comfortable keyboard.
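A minimal sketch of the retry idea for the User Scripts plugin (the VM name is a placeholder):

    #!/bin/bash
    # keep trying until libvirt reports the VM as running
    while [ "$(virsh domstate 'Windows 10' 2>/dev/null)" != "running" ]; do
      virsh start 'Windows 10'
      sleep 30
    done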
  10. Nextcloud is generally the go-to in terms of hosting a personal cloud for family members. Note that once you expose Nextcloud to the Internet, getting the attention of a hacker is a sooner-or-later event, so having a good backup strategy and keeping your Nextcloud updated would be prudent. It's like living above a pub: sooner or later a drunk guy is gonna think your apartment door is the toilet door, so you want it locked properly or he's gonna go in and pee in your place. The alternative is to set up a VPN server, but that requires your F&F to be constantly connected to the VPN to access the files, something most people find very annoying.
  11. A few pointers: If you don't have any (fast) device as cache then managing shares will have to be done through the smb-extra config file. With a cache disk, you can create shares and then use symlinks to your zfs mounts / folders etc. and it will be transparent. Note that any zfs filesystem shared over SMB will need a line in /etc/mtab to prevent spurious quota-warning spam in syslog. Some echo into /etc/mtab at array start will do:
echo "[pool]/[filesystem] /mnt/[pool]/[filesystem] zfs rw,default 0 0" >> /etc/mtab
I had VMs using qcow2 images that had a rather strange issue: they would boot up fine originally, but if shut down and started back up, they would hang libvirt completely. So I have now switched to using zvols instead (use qemu-img convert to convert from qcow2 to raw). Note that there's a bug that causes the zvol mounts to not show up, which may or may not be solved by an export and then import after the initial import at boot. Without the zvol mounts, you have to use /dev/zd#, which can change after the first reboot (but won't change again without further changes to the volumes), so I recommend doing a reboot after making changes to a volume to solidify the zd# devices. You might also want to use slightly different volume sizes between VMs so it's easier to identify which is which.
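A minimal sketch of the qcow2-to-zvol conversion (pool, dataset and size are placeholders; the volume must be at least as large as the virtual size of the qcow2):

    # thin-provisioned 64G zvol, then clone the vdisk into it as raw
    zfs create -s -V 64G tank/vm/win10
    qemu-img convert -p -f qcow2 -O raw /mnt/user/domains/win10/vdisk1.qcow2 /dev/zvol/tank/vm/win10
    # if the /dev/zvol links are missing (the bug above), use the matching /dev/zd# instead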
  12. You need to elaborate. What sort of access are you talking about?
  13. It depends on your hardware config. Typically, if your hardware supports PCIe pass-through and you can live with a VM, then a VM would be a lot more convenient. I have been using my daily driver (main workstation) as a VM for years now. Having to reboot to bare metal is just a pain in the backside.
  14. Search for "unexpected GSO" in the LT release notes for the fix, i.e. change from virtio to virtio-net. Note though that virtio-net can carry a performance penalty, so if you find it too slow, try changing the machine type to the 5.0 version (presumably yours is still 4.2). I don't get any more GSO errors with Q35-5.0 and i440fx-5.0, even with virtio.
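In the VM XML the two changes look roughly like this (a sketch; the exact machine string and bridge will differ on your system):

    <type arch='x86_64' machine='pc-q35-5.0'>hvm</type>
    ...
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio-net'/>
    </interface>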
  15. Doable with a caveat. You need at least one device in the array; that's a requirement. Some were even able to make do with just an additional USB stick (in addition to the Unraid OS stick). I was able to move everything (docker, libvirt, appdata, vdisks, daily network drive) to my zfs pool of 2 NVMe drives - half in RAID-0 and half in RAID-1. My array currently has a single 10TB spinner that is spun down most of the time. I only use it as emergency temp space (e.g. when upgrading my SSD backup btrfs raid-5 pool).
  16. 6.9.0-beta25 has been released. You can give it a try.
  17. It works better in Windows bare metal because Windows is able to reserve the RAM directly for the host memory buffer (HMB) to work. A VM vdisk doesn't have that luxury because it is emulated; hence, it doesn't really work that well, i.e. the SSD works like a truly DRAM-less version. If you have another storage device in the array (so that you can take the NVMe out of the array) and pass through the NVMe as a PCIe device to the VM, then hopefully that would work better, as the HMB will presumably then be detected.
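Passing the NVMe through as a PCIe device ends up as a hostdev entry in the VM XML, roughly like this (the PCI address is a placeholder; use the one Tools -> System Devices shows for your NVMe):

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      </source>
    </hostdev>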
  18. "when downloading a file" <-- torrent? sab? web? Your NVMe (ADATA SX6000) looks to be a DRAM-less model so it wouldn't be the best under heavy load e.g. torrenting with Unraid. Also with your server, you might want to add a storage device to array and put the SSD on cache pool instead. Array doesn't support trim, which will exacerbate the deficiency of the Adata SSD.
  19. Attach diagnostics (Tools -> Diagnostics -> attach entire zip file). You also need a detailed description of the problem, e.g. does it happen all the time? Does it happen during load? Which load? Anything happening on the server at the same time? etc.
  20. Long overdue updates:

I am so happy with the Optane performance that I added another one. This time it's the same 905p but in the 380GB 22110 M.2 form factor. I put it in the same Asus Hyper M.2 adapter / splitter so it's now fully populated (and used exclusively for the workstation VM). My workstation VM now has a 380GB Optane boot drive + 2x 960GB Optane working drives + 2x 3.84TB PM983 storage + a 2TB 970 Evo temp drive.

Finally bought a Quadro P2000 for hardware transcoding. Had some driver issues that didn't agree with Plex, so I spent a few days migrating to Jellyfin, and then the Plex issue was fixed. 😅 I still decided to maintain both Plex and Jellyfin. The former is for the local network (mainly because I already paid for a Plex lifetime membership) and the latter is for remote access (because Jellyfin users are managed on my server instead of through a 3rd party like Plex).

And talking about remote access, I finally got around to setting up letsencrypt to allow some remote access while on holiday, e.g. watch media, read comics etc. Had to pay my ISP for this remote access privilege but it's not too bad.

Resisted checking out the 6.9.0 beta for quite some time and then noticed beta22 enables multiple pools, so I made the jump, only to open the zfs can of worms. 😆

It started with the Unraid Nvidia custom build having the aforementioned driver clash with Plex. That forced me to look around a bit, and I noticed ich777's custom version which has a later driver. He also built zfs + nvidia versions, which I decided to pick just out of curiosity.

My original idea was to set the 2x Intel 750 up in a RAID-0 btrfs pool as my daily network-based drive. That wasn't ideal though, since I have some stuff that I want fast NVMe speed for but not the RAID-0 risks. After some reading, I found out that a zfs pool is created based on partitions instead of full disks (in fact, zpool create on a /dev/something will create 2 partitions: p1 is BF01 (ZFS Solaris/Apple) + p9 is an 8MB BF07 (Solaris reserve), with only the BF01 partition used in the pool).

So then came the grand plan:
- Run zpool create on the Intel 750 NVMe just to set up p9 correctly, just to be safe.
- Run gdisk to delete p1 and split it into 3 partitions: 512GB + 512GB + the rest (about 93GB).
- Zpool p1 on each 750 in RAID 0 -> 1TB striped
- Zpool p2 on each 750 in RAID 1 mirror -> 0.5TB mirror
- Zpool p3 on each 750 in RAID 1 mirror -> 90+GB mirror
- Leave p9 alone

So I now have a fast daily network drive (p1 striped), a safe daily network drive (p2 mirror, e.g. for vdisks, docker, appdata etc.) and a document mirror (p3). I then use znapzend to create snapshots automatically. A sketch of the pool creation is at the end of this post.

Some tips with zfs - cuz it wasn't all smooth sailing. It's quite appropriate that the zfs plugin is marked as for expert use only in the CA store.

- I specifically use the by-id method to point to the partitions. I avoid the /dev/sd method since the letter can change.
- Sharing zfs mounts over SMB causes spamming of sys_get_quota warnings because SMB tries to read quota information that is missing from /etc/mtab. This is because zfs import manages mounts outside of /etc/fstab (which is what creates entries in /etc/mtab). The solution is pretty simple: echo a mount line into /etc/mtab for each filesystem that is exposed to SMB, even through symlinks.
echo "[pool]/[filesystem] /mnt/[pool]/[filesystem] zfs rw,default 0 0" >> /etc/mtab
- For whatever reason, a qcow2 image on the zfs filesystem + my VM config = libvirt hanging + zfs unable to destroy the vdisk filesystem.
After half a day of troubleshooting and trying out various things, my current solution is to create a volume instead of a filesystem (-V to create a volume, -s to make it thin-provisioned). That automatically creates a matching /dev/zd# (zd instead of sd, starting with zd0, then zd16, zd32, i.e. increasing by 16 for each new volume, don't ask me why) that you can mount in the VM as a block device through virtio (just like you would do to "pass through" storage by ata-id). You then use qemu-img convert to convert your vdisk file directly into /dev/zd# (target raw format) and voila, you have a clone of your vdisk in the zfs volume. Just make sure the volume size you create matches the vdisk size. Note that you might want to change cache = none and discard = unmap in the VM xml. The former is recommended but I don't know why. The latter is to enable trim. Presumably destroying a volume will change subsequent zd# codes, requiring changes to the xml. I don't have enough VMs for it to be a problem and I also don't expect to destroy volumes often. This is a good way to add snapshot and compression capabilities for an OS / filesystem that doesn't support them natively. For compression, there should be somewhat better performance, as it's done on the host (with all cores available) instead of being limited to the cores assigned to the VM.

Copying a huge amount of data between filesystems in the same pool using console rsync seems to crash zfs - an indefinite hang that requires a reboot to get access back. Don't know why. Doing it through smb is fine so far, so something is kinda peculiar there. It doesn't affect me that much (only discovered this when trying to clone appdata and vdisks between 2 filesystems using rsync).

You can use the new 6.9.0 beta feature of using a folder as the docker mount on the zfs filesystem. It works fine for me, with the major annoyance that it creates a massive number of child filesystems required for docker. It makes zfs list very annoying to read, so after using it for a day, I moved back to just having a docker image file.

I create a filesystem for the appdata of each group of similar dockers. This is to simplify snapshots while still allowing me some degree of freedom in defining snapshot schedules.

Turning on compression improves speed but with caveats: It only improves speed with highly compressible data, e.g. reading a file created by dd from /dev/zero is 4.5TB/s (write speed was 1.2TB/s). For highly incompressible stuff (e.g. archives, videos, etc.), it actually has a speed penalty - very small with lz4, but there's a penalty. You definitely want to create more filesystems instead of just subfolders to manage compression accordingly. gzip-9 is a fun experiment in hanging your server during any IO. When people say lz4 is the best compromise, it's actually true, so just stick to that.

Future consideration: I'm considering getting another PM983 to create a raidz1 pool on the host + create a volume to mount as a virtio volume. That will give me snapshot + raid5 + compression to use in Windows. Not sure about performance, so I may want to test it out.
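For reference, the partition-based layout above boils down to something like this (a sketch; the by-id names and pool names are placeholders for the actual Intel 750 devices):

    # p1 of each 750 striped, p2 and p3 mirrored, lz4 everywhere
    zpool create -m /mnt/fast fast /dev/disk/by-id/nvme-INTEL_750_A-part1 /dev/disk/by-id/nvme-INTEL_750_B-part1
    zpool create -m /mnt/safe safe mirror /dev/disk/by-id/nvme-INTEL_750_A-part2 /dev/disk/by-id/nvme-INTEL_750_B-part2
    zpool create -m /mnt/docs docs mirror /dev/disk/by-id/nvme-INTEL_750_A-part3 /dev/disk/by-id/nvme-INTEL_750_B-part3
    for p in fast safe docs; do zfs set compression=lz4 $p; done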
  21. Does anyone else have this display bug? The ZFS Info section eats into the System Info and Mellanox sections (the DEDUP HEALTH ALTROOT columns). The "No known data errors" status etc. has some strange line breaks too. I use the nvidia + zfs beta version.
  22. Define "good". Plex + Ryzen = software transcode so it depends a lot on the actual Ryzen CPU. Generally, Quicksync hw transcode will beat consumer Ryzen sw transcode on the same generation.
  23. Where is the 1MiB offset option located? I remember seeing an option in the GUI to pick the partition offset in the past but can't see it anymore for some reason.