tourist

Members
  • Posts: 11
  • Joined
  • Last visited


tourist's Achievements

Noob (1/14)

Reputation: 2

  1. Many thanks for creating this; it's been very helpful. Question: what about the 'Sign Up To Cloud' button in the upper-right? I clicked it, gave an email address, and got an email taking me to directions for all the ways to connect my systems to Netdata Cloud. One of them was for Docker, and the main difference seemed to be that it specified a base64 random identity token named NETDATA_CLAIM_TOKEN, and another parameter, NETDATA_CLAIM_URL=https://app.netdata.cloud. So I edited my netdata-glibc config and added those at the end of Extra Parameters:

        --runtime=nvidia --cap-add SYS_PTRACE --security-opt apparmor=unconfined -e NETDATA_CLAIM_TOKEN=rando-string-goes-here -e NETDATA_CLAIM_URL=https://app.netdata.cloud

     I clicked Apply and the container restarted, but the 'Sign Up To Cloud' button doesn't disappear, and after signing in at https://app.netdata.cloud/spaces it says I have no nodes. Has anybody else figured out how to link netdata-glibc with Netdata Cloud? (A plain docker run sketch of the claiming parameters is after this list.)
  2. The coolest thing about ZFS is that it can create virtual block devices. What I ended up doing was using the second form of `zfs create`, which creates a ZFS volume (zvol) exposed as a virtual block device under /dev:

        zfs create -s -V 1T -o volblocksize=4096 -o compression=lz4 mypool/vmvol

     This created `/dev/zd0`. Then I created a new VM, Ubuntu Server 20, using that /dev/zd0 over SATA. It performs well for larger block sizes. For database writes of batches of large rowsets (20 MB, 100K rows) it's quite fast, holding ~450 MB/sec over time as UNRAID's write cache becomes exhausted. When I need to expand the ZFS pool, I can add new drives once raidz drive expansion hits production. UNRAID has been good to me: building separate machines would have cost 3x more, and virtualizing in AWS would run about $3k per month (AWS estimate). It paid for itself in two months. (A zvol creation and verification sketch is after this list.)
  3. I ran a few more tests at bigger block sizes and got reasonable performance. Bigger block size = better performance, right up to the range of what my tiny ZFS array is capable of. The app for this particular VM is going to be batch-writing big chunks of rows to MariaDB tables, so now I think it'll be fine. (I know that the bigger MB/s and GB/s figures are partly due to write caching; in longer dd tests (500 GB) it averaged out to near the native write performance.) I'll continue benchmarking just for the heck of it. (A block-size sweep script is sketched after this list.)

     P.S. I did allocate a 2.5 Gbps Ethernet adapter to this VM, but haven't set it up yet. I take it this would be done using iSCSI, targetcli, etc.? I wonder if performance would be any better.

        $ dd if=/dev/zero of=./test99_bs8K_c500K.dat bs=8K count=500K oflag=direct status=progress
        4194304000 bytes (4.2 GB, 3.9 GiB) copied, 86.478 s, 48.5 MB/s

        $ dd if=/dev/zero of=./test99_bs16K_c250K.dat bs=16K count=250K oflag=direct status=progress
        4194304000 bytes (4.2 GB, 3.9 GiB) copied, 49.7929 s, 84.2 MB/s

        $ dd if=/dev/zero of=./test99_bs32K_c125K.dat bs=32K count=125K oflag=direct status=progress
        4194304000 bytes (4.2 GB, 3.9 GiB) copied, 32.96 s, 127 MB/s

        $ dd if=/dev/zero of=./test99_bs64K_c62.5K.dat bs=64K count=62500 oflag=direct status=progress
        4096000000 bytes (4.1 GB, 3.8 GiB) copied, 22.2345 s, 199 MB/s

        $ dd if=/dev/zero of=./test99_bs128K_c31.25K.dat bs=128K count=31250 oflag=direct status=progress
        4096000000 bytes (4.1 GB, 3.8 GiB) copied, 12.7734 s, 321 MB/s

        $ dd if=/dev/zero of=./test99_bs256K_c15625.dat bs=256K count=15625 oflag=direct status=progress
        4096000000 bytes (4.1 GB, 3.8 GiB) copied, 10.2312 s, 400 MB/s

        $ dd if=/dev/zero of=./test99_bs512K_c7813.dat bs=512K count=7813 oflag=direct status=progress
        4096262144 bytes (4.1 GB, 3.8 GiB) copied, 4.02221 s, 1.0 GB/s

        $ dd if=/dev/zero of=./test99_bs1M_c3906.dat bs=1M count=3906 oflag=direct status=progress
        4095737856 bytes (4.1 GB, 3.8 GiB) copied, 3.28692 s, 1.2 GB/s
  4. Pass a physical NIC through from the Unraid host to a VM? That will give faster transfers between a ZFS dataset on Unraid and the VM? Not obvious [to newb me], but why not, I'll try it.
  5. Hullo... I'm about one year in with Unraid on a system I built to run VMs and Docker apps (AMD 3950X, MSI X570, 128 GB RAM, 4 TB SSD array, 2x RTX 2070 SUPERs, GeForce GT 710). Really enjoying Unraid and CA. I needed more storage, so I bought 4x Seagate X18 18TB drives, figuring I'd stripe them with ZFS or RAID. I tested the new drives a bit first, formatting them ext4 from the Unraid CLI (~280 MB/sec with 1M writes to a single drive, about the same for reads; both via dd with iflag/oflag=direct, clearing Unraid's filesystem cache before the read test). Then I tested with an Ubuntu VM, passing the device in as a secondary drive, raw with VirtIO, and got 245 MB/sec writes and about the same for reads. I thought, "Okay, that's close enough to spec. Maybe I could run them in a raidz2 pool and find a performant way of passing through a ZFS dataset."

     So I created a raidz2 pool from the four drives. With caching disabled,

        dd if=/dev/zero of=./test.dat bs=1M count=40000 oflag=direct

     writes at ~404 MB/s and reads at ~496 MB/s. Not bad! In the same pool I created a 1 TB zvol block device, then created an Ubuntu 20 VM using the new block device, /dev/zd0, as its primary disk with VirtIO. Performance was abysmal: writes ~35 MB/s, reads ~37 MB/s (bs=1M count=40000 oflag=direct). I tried accessing another dataset on the same zpool via SMB, from another Ubuntu VM on the Unraid host, and got writes at 20 MB/s and reads at 18 MB/s (again bs=1M, etc.). Is there a better way? (What causes such slowness??) I know about passing the /dev/sd* drives in to let a guest VM create its own zpool, but I wanted to share the darn thing across VMs and not keep it in just the one VM. I've seen the idea of running TrueNAS in a VM; I imagine it'd be quick for that VM, since the devices would be passed through, but wouldn't peer VMs suffer the same lag as my previous efforts? Is it worth going down that path? What's your best method and result for sharing ZFS across VMs? (The pool, zvol, and benchmark commands are sketched after this list.)
  6. My F@H Docker image was running just fine... then after a reboot the container starts, but the web console times out (http://10.0.0.52:7396/). The netdata plugin shows that nothing much is happening on the GPUs or CPU, so I guess my F@H somehow got broken, and it's pretty silent about why. Stopping and starting it again doesn't change anything. GPU IDs remain the same as before the reboot. There are no errors about GPUs or drivers, and the GPUs are recognized. Latest logs here: https://pastebin.com/g1vm41Qy I've scanned this thread and upgraded to the latest version, but no change in behavior. I've looked for logs besides those produced directly by the F@H control client, under /var/log, but found nothing interesting. Posting here in the hope that one of you recognizes the pattern and can nudge me in the right direction. (A few container-level checks are sketched after this list.) Cheers.
  7. Okay, I noticed my first post had [unexpectedly] landed in the subforum of the last post I'd read, so I deleted it. What would be the correct subforum?
  8. This week I installed Unraid onto a USB flash drive (for the 30-day trial), and proceeded to configure and customize it to work with my drives and GPUs. I was iterating on changes to Main->Flash->Syslinux to get two Nvidia RTX 2080 SUPERs and a cheapie GeForce GT 710 working properly, in anticipation of passthru to VMs. This morning I rebooted 3-4 times to test the changes. After one of the reboots, I ssh'd in and discovered that the customizations I'd made and the files I'd downloaded under /root were just... gone! Likewise, files created under a new user's home directory were missing. But the changes I'd made to syslinux.cfg via the web GUI were intact, as were the array definition and the CA plugins I'd installed.

     I am fairly certain this outcome isn't connected to anything I've touched so far. I'm a software and devops engineer and have been administering Linux and Unix-like systems for more than 25 years, so I'm aware of pitfalls and watch carefully for potential consequences of my actions. Is this a known issue, or expected behavior? I've seen a few other posts with similar titles. Does Unraid go and nuke things it doesn't know about? For the record, I had made a flash backup about 48 hours ago (a CLI flash-backup sketch is after this list), but I would have expected to use it only in case of hardware catastrophe. For this experimental outing with Unraid, I've allocated high-quality, high-speed equipment: the flash device is a brand-new SanDisk Extreme, 300 MB/sec, erased and tested overnight before installing Unraid on it. Any and all insights appreciated.
  9. Ta. A bit of a whinge: I got here because I needed to mount an exFAT USB stick. I looked for a plugin, found SNAP, and learned it's deprecated. OK. Found it's replaced by Unassigned Devices, looked for the plugin, and found broken links to .plg files. Looked some more and found an old support thread pointing to this new thread. (Suggestion: archive the old pages, hide them from search engine spiders, or delete them. A newbie shouldn't have to chase through this many stale links and still not find where the plugins live.) Dumb question: why isn't there a link to the latest list of current plugins right on the Unraid Plugins tab/page?
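
Sketches referenced in the posts above

For the Netdata Cloud claiming in post 1: a minimal sketch of the same claim parameters expressed as a plain docker run against the official netdata/netdata image. This is only an assumption about how the variables are normally passed; whether the netdata-glibc template's image honors them at all is exactly the open question in that post, and the token, port mapping, and NETDATA_CLAIM_ROOMS values below are placeholders.

    # Hypothetical docker run; the token and room ID are placeholders, not real values.
    docker run -d --name=netdata \
      --cap-add SYS_PTRACE \
      --security-opt apparmor=unconfined \
      -p 19999:19999 \
      -e NETDATA_CLAIM_TOKEN=rando-string-goes-here \
      -e NETDATA_CLAIM_URL=https://app.netdata.cloud \
      -e NETDATA_CLAIM_ROOMS= \
      netdata/netdata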
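
For the zvol in post 2: a short sketch of creating the sparse volume and confirming the block device it exposes. The /dev/zvol/<pool>/<name> symlink path is the usual ZFS-on-Linux layout, assumed here rather than taken from the post.

    # Create a sparse 1 TB ZFS volume (zvol) with 4K blocks and lz4 compression
    zfs create -s -V 1T -o volblocksize=4096 -o compression=lz4 mypool/vmvol

    # Confirm the volume exists and locate its block device node
    zfs list -t volume                   # should show mypool/vmvol
    ls -l /dev/zvol/mypool/vmvol         # usually a symlink to /dev/zd0, /dev/zd16, ...

    # Check the size the device reports to consumers (VMs, partitioners, ...)
    blockdev --getsize64 /dev/zd0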
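
For the benchmarks in post 3: a small loop that runs the same kind of direct-I/O write sweep across block sizes in one pass. The script and its file names are illustrative only; it assumes GNU coreutils (for numfmt) and should be run from inside the filesystem being tested, writing roughly 4 GiB per block size as in the post.

    #!/bin/bash
    # Sweep dd write tests across block sizes, ~4 GiB written per run.
    total=$((4 * 1024 * 1024 * 1024))
    for bs in 8K 16K 32K 64K 128K 256K 512K 1M; do
        bytes=$(numfmt --from=iec "$bs")      # block size in bytes (K = 1024)
        count=$((total / bytes))
        echo "== bs=$bs count=$count =="
        # dd prints its throughput summary on stderr; keep only that last line
        dd if=/dev/zero of="./sweep_bs${bs}.dat" bs="$bs" count="$count" oflag=direct 2>&1 | tail -n 1
        rm -f "./sweep_bs${bs}.dat"
    done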
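
For the raidz2 experiment in post 5: a sketch of how the pool, the zvol, and the dd checks described there would typically be laid out. The pool name, the by-id device paths, and the mountpoint are placeholders, not the values actually used in that post.

    # Build a raidz2 pool from the four 18 TB drives (device IDs are placeholders;
    # /dev/disk/by-id paths survive /dev/sdX letters shuffling between boots).
    zpool create tank raidz2 \
        /dev/disk/by-id/ata-ST18000_SERIAL1 \
        /dev/disk/by-id/ata-ST18000_SERIAL2 \
        /dev/disk/by-id/ata-ST18000_SERIAL3 \
        /dev/disk/by-id/ata-ST18000_SERIAL4

    # Sequential write/read check on the pool itself, bypassing the page cache
    dd if=/dev/zero of=/tank/test.dat bs=1M count=40000 oflag=direct
    dd if=/tank/test.dat of=/dev/null bs=1M iflag=direct

    # 1 TB sparse zvol to hand to a VM as its virtual disk
    zfs create -s -V 1T tank/vmdisk
    ls -l /dev/zvol/tank/vmdisk          # block device the VM definition points at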
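
For the stalled F@H container in post 6: a few generic checks that narrow down whether the web-control port is listening at all. The container name FoldingAtHome is an assumption; substitute whatever docker ps reports.

    # Is the container running, and what does it log at startup?
    docker ps --filter name=FoldingAtHome
    docker logs --tail 100 FoldingAtHome

    # What processes and port mappings does the container have?
    docker exec FoldingAtHome ps aux     # assumes ps exists inside the image
    docker port FoldingAtHome

    # From the Unraid host: does the web console port answer, or just hang?
    curl -sv --max-time 5 http://10.0.0.52:7396/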
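
For the flash backup mentioned in post 8: a minimal sketch of taking one from the command line, assuming the Unraid flash drive is mounted at /boot; the /mnt/user/backups destination is a placeholder for any array share with space available.

    # Archive the flash drive contents (config, syslinux.cfg, plugins) to an array share
    BACKUP=/mnt/user/backups/flash-$(date +%Y%m%d-%H%M).tar.gz
    mkdir -p /mnt/user/backups
    tar -czf "$BACKUP" -C /boot .
    tar -tzf "$BACKUP" | head            # quick sanity check of the archive contents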