SeeGee

Members
  • Posts: 84
  • Joined
  • Last visited

Everything posted by SeeGee

  1. How much is being written? Is it a few bytes, or lots of megabytes? I'm running beta25 with a RAID1 SSD pool as well as a RAID1 NVMe pool, and I do get frequent writes, but it's only a few KB at a time (Docker runs on the NVMe pool). That's well within what I expect, considering how many containers I have running.
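     If you want an actual number rather than a feel, field 10 of /proc/diskstats is sectors written (512 bytes each), so sampling it twice gives the write volume in between. Just a sketch; swap in your own device name for nvme0n1:
     # total MiB written to nvme0n1 since boot
     awk '$3=="nvme0n1" {print $10*512/1024/1024 " MiB written since boot"}' /proc/diskstats
     # take a second sample a minute later and subtract the two values
     sleep 60
     awk '$3=="nvme0n1" {print $10*512/1024/1024 " MiB written since boot"}' /proc/diskstats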
  2. Try the lsof command:
     # lsof +D /mnt/<diskwithactivity>
     It might take a few minutes to complete, but it will show you which files are open/in use on that drive. You could also use:
     # lsof /dev/<active drive>
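     If lsof comes back empty, fuser (if installed) gives a second opinion; the disk and device names below are just placeholders:
     # list processes with files open anywhere under the mount point
     fuser -vm /mnt/disk1
     # or check who holds the raw block device open
     fuser -v /dev/sdb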
  3. If you're capped at gigabit and only worried about network file storage, your cache and array disks only need to sustain roughly 120 MB/s for anything coming in or out over the network. In that case RAID0 offers no benefit, not even when running the mover, because the mover can only move files at whatever speed the destination drive can handle. RAID1 offers redundancy (the safe bet). Once you start factoring in Docker container and virtual machine performance, with the data for those features also stored on the cache, the speed benefit of RAID0 becomes more appealing, but there's no safety net if a disk fails. Some people use an unassigned SSD to run Docker/VMs and get the best of both worlds.
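     For reference, the rough arithmetic behind that figure: 1 Gbit/s ÷ 8 = 125 MB/s of raw line rate, and after Ethernet/TCP/SMB overhead real-world transfers usually land around 110-118 MB/s, so a single decent SSD (or even a fast HDD) already saturates the link.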
  4. In a parity rebuild operation, all data drives are effectively treated as read-only. The only disk that gets written to is the parity drive itself; your data drives in the array will be untouched.
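     A toy illustration of why that is (made-up byte values): single parity is just the bitwise XOR of the data disks, so a rebuild only has to read the data drives and write the result to parity.
     # three bytes standing in for three data drives
     d1=0xA5; d2=0x3C; d3=0x0F
     # the parity byte is the XOR of all of them; only this value gets written (to the parity drive)
     printf 'parity byte = 0x%02X\n' $(( d1 ^ d2 ^ d3 ))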
  5. Started messing around with multiple pools on 6.9-beta25. I moved my unassigned NVMe into a second pool, added another NVMe to that pool, and converted/balanced to RAID1 without issue. I use this pool for everything running on the server (Docker, VMs, system) and use the original cache pool for the shares on the array. Thank you very much for your efforts on this feature. It has improved my experience with Unraid substantially; it's a feature I needed before I even knew I needed it. A pat on the back for the Limetech team.
  6. @JorgeB I see your help on this thread is also relevant here. Thank you for your information. In BOTH places.
  7. Is it the serial or the UUID? Either way, Unraid is smart enough to keep the array drives in the order they were (and should be) in, even if a device comes up with a new /dev/sdX. However, if you are using the "create new config" option, you might want to make note of the order of assignment beforehand.
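     If you want a record of which serial maps to which device before doing a new config, something along these lines works (the grep just hides the partition entries):
     # persistent names built from model + serial, symlinked to the current /dev/sdX devices
     ls -l /dev/disk/by-id/ | grep -v -- -part
     Unraid itself keys array assignments to the drive serials stored on the flash drive, which is why a shuffled /dev/sdX letter doesn't matter.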
  8. I recently installed 6.9-beta25 to check out the multiple cache pool feature. I am currently using a 1TB NVMe as an unassigned device for my containers and VMs; it's an encrypted btrfs partition. I bought a second (identical) drive and would like to use the two together as a RAID1 cache pool. Can I simply create a new pool, assign both drives to the newly created pool, and fire it up without having to format the original NVMe again? Or is it better to assign the one drive first and then add the second afterwards? I understand that I will need to update the container and VM paths, of course, and I'm assuming that running a balance will be required.
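     For context, my understanding is that the add-device-and-balance step boils down to roughly the following btrfs commands (device and mount point names are placeholders; with an encrypted pool the unlocked /dev/mapper device is what gets added, not the raw drive):
     # add the second (unlocked) device to the existing btrfs filesystem
     btrfs device add /dev/mapper/nvme2 /mnt/cache_nvme
     # convert data and metadata to raid1 across both devices
     btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache_nvme
     # confirm both copies exist
     btrfs filesystem usage /mnt/cache_nvme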
  9. I have personally verified that both the Asus Hyper M.2 X16 (V2) and (Gen4) cards work on a Supermicro X10DRU-i Motherboard. Just be aware that the Gen 4 version is 2.6 cm taller, and thus wouldn't fit in my chassis using the slot I wanted.
  10. I'm still having these issues. Does anyone have any ideas?
  11. I've recently picked up a dual Xeon E5-2690 v3 server, and I appear to be getting fairly poor NUMA memory allocation. These CPUs are a little funny, as they use a topology called "Cluster-on-Die": each CPU has 12 cores, but those cores are divided into two distinct NUMA nodes, each with access to its own memory controller. There was a whitepaper from NASA that covered this topology in great detail [here]. Super helpful read if you would like to know more about these processors! Here's a diagram from that article which shows the exact topology of these CPU dies (screenshot); each bi-directional "ring" is its own NUMA node with 6 cores associated.
      The output from numactl --hardware:
      # numactl --hardware
      available: 4 nodes (0-3)
      node 0 cpus: 0 1 2 3 4 5 24 25 26 27 28 29
      node 0 size: 32124 MB
      node 0 free: 197 MB
      node 1 cpus: 6 7 8 9 10 11 30 31 32 33 34 35
      node 1 size: 32253 MB
      node 1 free: 65 MB
      node 2 cpus: 12 13 14 15 16 17 36 37 38 39 40 41
      node 2 size: 32253 MB
      node 2 free: 165 MB
      node 3 cpus: 18 19 20 21 22 23 42 43 44 45 46 47
      node 3 size: 32253 MB
      node 3 free: 130 MB
      node distances:
      node   0   1   2   3
        0:  10  11  21  21
        1:  11  10  21  21
        2:  21  21  10  11
        3:  21  21  11  10
      This is what I get from lstopo (screenshot).
      And this is what I'm getting from numastat after 22 hours of uptime:
      # numastat -n
      Per-node numastat info (in MBs):
                               Node 0          Node 1          Node 2          Node 3
                      --------------- --------------- --------------- ---------------
      Numa_Hit             2288524.80      1739432.91      1395431.20       815033.61
      Numa_Miss             662912.27      1059782.66       983562.46      1219333.84
      Numa_Foreign         2014864.54       942640.61       651756.69       316329.40
      Interleave_Hit           196.36          196.60          196.48          196.50
      Local_Node           2288502.47      1739217.54      1395220.26       814821.59
      Other_Node            662934.60      1059998.02       983773.40      1219545.87
                                Total
                      ---------------
      Numa_Hit             6238422.52
      Numa_Miss            3925591.23
      Numa_Foreign         3925591.23
      Interleave_Hit           785.94
      Local_Node           6237761.86
      Other_Node           3926251.89
      I have also used the Intel Memory Latency Checker to get a better picture of how these NUMA nodes interact with each other:
      # ./mlc
      Intel(R) Memory Latency Checker - v3.8
      Measuring idle latencies (in ns)...
                   Numa node
      Numa node         0       1       2       3
             0       76.7   159.8   198.1   208.2
             1      150.2    79.6   194.1   203.6
             2      195.7   205.3    77.6   161.9
             3      192.5   203.5   149.3    79.3
      Measuring Peak Injection Memory Bandwidths for the system
      Bandwidths are in MB/sec (1 MB/sec = 1,000,000 Bytes/sec)
      Using all the threads from each core if Hyper-threading is enabled
      Using traffic with the following read-write ratios
      ALL Reads        :  111205.5
      3:1 Reads-Writes :  107675.1
      2:1 Reads-Writes :  108332.7
      1:1 Reads-Writes :   91567.6
      Stream-triad like:  104022.0
      Measuring Memory Bandwidths between nodes within system
      Bandwidths are in MB/sec (1 MB/sec = 1,000,000 Bytes/sec)
      Using all the threads from each core if Hyper-threading is enabled
      Using Read-only traffic type
                   Numa node
      Numa node         0        1        2        3
             0    31591.2  18212.4  14887.9  14160.6
             1    18748.3  31443.5  14796.8  14171.0
             2    14965.8  14210.9  30433.0  18081.5
             3    14961.6  14270.1  18111.1  31309.2
      Measuring Loaded Latencies for the system
      Using all the threads from each core if Hyper-threading is enabled
      Using Read-only traffic type
      Inject   Latency  Bandwidth
      Delay    (ns)     MB/sec
      ==========================
       00000   310.44   101637.3
       00002   301.04   101744.6
       00008   322.89   102328.3
       00015   333.16   103578.6
       00050   292.31   105430.2
       00100   252.14   102857.1
       00200   190.95    79047.3
       00300   139.05    57093.7
       00400   149.08    43520.2
       00500   128.57    35536.9
       00700   139.25    25450.3
       01000   126.08    18124.3
       01300   124.63    13950.7
       01700   126.16    10748.6
       02500   121.35     7475.3
       03500   123.91     5374.5
       05000   126.66     3895.5
       09000   117.32     2418.6
       20000   128.93     1341.4
      Measuring cache-to-cache transfer latency (in ns)...
      Local Socket L2->L2 HIT  latency    28.5
      Local Socket L2->L2 HITM latency    31.8
      Remote Socket L2->L2 HITM latency (data address homed in writer socket)
                           Reader Numa Node
      Writer Numa Node      0       1       2       3
                  0         -    56.6    93.1    95.2
                  1     107.1       -   124.2   109.7
                  2     153.7   141.3       -    85.4
                  3     130.8   113.1    63.0       -
      Remote Socket L2->L2 HITM latency (data address homed in reader socket)
                           Reader Numa Node
      Writer Numa Node      0       1       2       3
                  0         -    95.6   104.3   108.9
                  1     101.6       -   149.2   160.7
                  2     123.4   133.5       -    78.3
                  3     150.4   134.5    88.5       -
      This is my first system that actually implements NUMA, and I have done a lot of research into what NUMA nodes are and how they function, but I am at a loss as to why I am getting so many numa_miss and numa_foreign results. At this point I have many containers running, but no VMs active. I am hoping to improve the NUMA performance within unRAID itself before I start trying to tackle VM NUMA configuration. Any advice would be greatly appreciated.
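      In the meantime, the workaround I'm testing (just a sketch; the CPU ranges come from the numactl layout above, and the container name and image are made up) is to keep a heavy container's CPUs and memory on the same node:
      # pin a container to NUMA node 0's cores and its local memory
      docker run -d --name=heavy_app \
        --cpuset-cpus="0-5,24-29" \
        --cpuset-mems="0" \
        some/image
      # the same idea for an arbitrary process
      numactl --cpunodebind=0 --membind=0 some_command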
  12. Excellent, thanks! (Edit) Update: I managed to do the migration, but it didn't go quite as smoothly as I would have liked. I removed the vfio-pci.cfg file, but the system would hard-lock right after the Samba line during boot. Fortunately I have been documenting my changes and customization tweaks as I go, and I realized that the ACS override was still specified in the boot line, so I took that out as well; from that point on, everything went fairly smoothly. I'm happy to say that it's running along nicely, as though nothing changed. All containers started, and all VMs (not using a passed-through GPU) are running fine. I am now looking into fine-tuning the server for VM performance (CPU pinning/isolation, NUMA/memory allocation, etc.). I've watched SpaceinvaderOne's server tuning videos, and they were a great help in pointing me in the right direction. That conversation will be a different post. Thank you again.
  13. I bought a Supermicro 6028U-TR4T+ (2U) server and will be migrating my Unraid installation from old consumer-grade hardware (i7-2600) to this new machine.
      Chassis: Supermicro 6028U-TR4T+
      Motherboard: Supermicro X10DRU-i+
      CPU: (x2) Intel Xeon E5-2690 v3
      RAM: 128GB DDR4-2400 ECC (8x 16GB)
      SAS Controller: LSI 9311-8i (IT mode)
      Backplane: Supermicro BPN-SAS3-826EL1 (12x 3.5" HDD)
      Drives: 8x WD 6TB, 4x WD 8TB (both white label)
      Cache: Samsung 1TB 860 Evo SSD (SATA)
      Network: 4x 10GbE (Intel X540 chipset)
      What tips can you offer to prepare my installation for the move to the new hardware? I currently have PCIe passthrough configured for GPU use with a VM, so I will obviously need to remove the entries in /boot/config/vfio-pci.cfg beforehand. I assume I will also need to make note of the order of disks in the array. Are there any issues using the Intel C612 in AHCI mode for my cache disk? I'm sure I will have other questions when I actually do the migration, but that's about it for now.
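      For anyone doing a similar move, the minimal prep I'm planning looks like this (sketch only; the backup share name is just an example):
      # back up the whole flash config before touching anything
      cp -r /boot/config /mnt/user/backups/flash-config-$(date +%F)
      # take GPU passthrough out of the picture until the new board is sorted
      mv /boot/config/vfio-pci.cfg /boot/config/vfio-pci.cfg.bak
      # record the current disk assignments (serial -> device) for reference
      ls -l /dev/disk/by-id/ | grep -v -- -part > /mnt/user/backups/disk-serials.txt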
  14. In the README.md file on GitHub, it says: "If there are multiple ovpn files then please delete the ones you don't want to use (normally filename follows location of the endpoint) leaving just a single ovpn file and the certificates referenced in the ovpn file (certificates will normally have a crt and/or pem extension)." What happens if there are multiple config files? Will it choose one at random/round-robin? I use your qbittorrentvpn container, and I seem to recall a mention in that container that it would choose one randomly. The reason I ask is that I would like to randomize the servers I connect to.
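      If it turns out the container really does expect exactly one .ovpn, the fallback I have in mind is a small script that shuffles in a random config before each start (purely a sketch; the appdata path, "spares" folder, and container name are just examples from my setup):
      #!/bin/bash
      # pick one endpoint at random from a folder of spares and make it the only active config
      CONF_DIR=/mnt/user/appdata/binhex-qbittorrentvpn/openvpn
      rm -f "$CONF_DIR"/*.ovpn
      cp "$(ls "$CONF_DIR"/spares/*.ovpn | shuf -n 1)" "$CONF_DIR/active.ovpn"
      docker restart binhex-qbittorrentvpn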
  15. Just an update: The drives are installed, preclear passed, and parity is currently rebuilding. Thank you for your help.
  16. They are brand-new, shucked WD 6TB drives. Your insight has been helpful, and I know how I'm going to do this: I'm going to migrate the data, create a new array config using only the disks I'm keeping, and rebuild parity. While parity is rebuilding, I can start a preclear on all of the new drives I have installed. Once the new drives pass preclear, I'll add them to the array.
  17. Thank you, this gets me headed in the right direction. It looks like the way to go is simply: move the data off the disks I wish to remove onto the disks I'm keeping, then create a new config with the remaining disks as well as the new disks, then rebuild parity. Is it better to rebuild parity before adding the new, blank disks?
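      For the data move itself I'm planning to go disk share to disk share and verify before pulling anything (sketch; the disk numbers are just examples):
      # copy everything from a disk being retired onto a disk that's staying
      rsync -avPX /mnt/disk5/ /mnt/disk1/
      # dry-run with checksums afterwards as a sanity check; an empty file list means the copies match
      rsync -avPX --checksum --dry-run /mnt/disk5/ /mnt/disk1/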
  18. I have no issue migrating all the data off the drives I wish to remove from the array; in fact, I've already moved the majority of it. What I wanted clarification on was more a question of "what changes when you create a new config?" If I create a new config, will it format/erase data on drives from the old array? (I understand new drives will need to be formatted.) If I create a new config, will I need to re-create my users/shares/permissions/docker containers? If shares are not modified, what happens with disk include/exclude lists? Thank you for any insight you can provide!
  19. I need to swap out some 500GB drives for four 6TB drives, and the drives are not hot-swappable. I understand that simply replacing one drive at a time and letting each one rebuild is the recommended way, but I'm hoping to get it done in a single sweep to minimize downtime. What's the best course of action here? I understand that I could also simply reset the array config and set up the array from scratch, but will that destroy any existing data or shares? I think a parity rebuild will be necessary if I use that method...
  20. Binhex just answered this a couple of posts up: see Q2 for how to confirm the VPN tunnel is working: https://github.com/binhex/documentation/blob/master/docker/faq/delugevpn.md
  21. I am experiencing a similar error using a different VPN provider. When I start the container I get the warning below, but when I checked credentials.conf, the permissions were already set to 600. It obviously doesn't affect functionality, but I'd rather have it happy.
      2020-02-19 16:08:52,946 DEBG 'start-script' stdout output:
      Wed Feb 19 16:08:52 2020 WARNING: file 'credentials.conf' is group or others accessible
      I also get the "AEAD Decrypt error: cipher final failed" message, but not right away; it only shows up sporadically after the container has been running for a few hours. I'm pretty certain the connection is successful, because the logs say:
      [info] Successfully retrieved external IP address xxx.yyy.zzz.111
      ...which is NOT my actual external IP. To verify the VPN was working, I dropped that IP into a geo-location lookup, and it returned a location far, far from home. One non-standard change I made within the qBittorrent Web UI: I changed the network interface under Advanced Settings from the default (I think that was eth0) to tun0, under the impression that this guarantees a kill switch of sorts, i.e. zero traffic allowed without the VPN being active. Correct?
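      For what it's worth, these are the checks I've been running from the host side (sketch; the appdata path and container name are from my setup, and the second command assumes curl exists inside the image):
      # make sure the credentials file really is locked down on the host
      chmod 600 /mnt/user/appdata/binhex-qbittorrentvpn/openvpn/credentials.conf
      # confirm the external IP the container sees is the VPN endpoint, not my WAN IP
      docker exec binhex-qbittorrentvpn curl -s https://ifconfig.io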
  22. I'm running an older Sandy Bridge i7-2600, and I am unable to get Quick Sync enabled. I have verified that it is in fact ENABLED in the BIOS. My motherboard is an odd one (MSI P67A-GD65) that has no onboard HDMI port of any kind (even though every CPU it supports appears to have an iGPU), but the BIOS explicitly has an option to enable/disable Quick Sync. The lack of an HDMI port makes it impossible to try the dummy-dongle trick. When I run modprobe i915, it executes without any output and the module loads, but I get no /dev/dri. Is there a way to force this? I know my CPU is old and its Quick Sync support is crappy, but it's one of those things: if you can configure the software to use it, why not? (unRAID 6.8.2)
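      If anyone wants diagnostics, these are the standard checks I can run and post output from (nothing Unraid-specific):
      # load the driver and see whether a render node appears
      modprobe i915
      ls -l /dev/dri
      # look for any hint about why the iGPU wasn't initialised
      dmesg | grep -iE "i915|drm"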
  23. Re: "6600K Useful?" For your load, it shouldn't be a problem. I'm running an i7-2600 with 8 SATA 6Gb/s drives and several SATA 3Gb/s drives attached. I use an Nvidia GTX 760 in my Win 10 VM. My hardware is notably older than what you're proposing, and I have no issues. If you already have the hardware, I'd say fire it up! You do get a 30-day trial!
  24. I have an LSI 9211-8i, and one day while monkeying around with the server I went to pull out the card right after I shut the server down. It was so hot I could barely touch it. I know these cards get warm, but this was hot enough to be concerning. I had a Sunon 40x25mm mag-lev fan handy and was able to jury-rig it onto the heatsink using a couple of long, thin screws, and that seems to have solved the problem; any time I touch it now it's barely warm. The fan fits almost perfectly.
  25. I am using the binhex-qbittorrent docker, which works very well. I use the following structure: /Torrents/Incoming/ for incomplete torrents and /Torrents/Seeding/ for complete torrents. qBittorrent is configured to automatically move completed torrents from ./Incoming to ./Seeding once they are done downloading. In an effort to minimize thrashing of the mechanical disks, I would like the ./Incoming dir to stay on the cache (SSD) but still allow for overflow to the HDDs, and the ./Seeding dir to live ONLY on HDDs. In other words: while it's downloading, stay on the cache (unless the cache is full), but once it's complete, move it completely off the cache drive. I guess this could be achieved if I make separate shares and use the cache options in each share, but can it be done within an existing share more transparently? Could symlinks be the answer?
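      The two-share version I'm picturing would look roughly like this (share names and mappings are just examples; one caveat is that the mover will still sweep anything left on the cache for a cache "Yes" share on its schedule):
      Share torrents-incoming  ->  Use cache disk: Yes  (new files land on the SSD; writes overflow to the array if the cache fills)
      Share torrents-seeding   ->  Use cache disk: No   (files live only on the array)
      Container mappings: /Torrents/Incoming -> /mnt/user/torrents-incoming, /Torrents/Seeding -> /mnt/user/torrents-seeding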