Xaero

Members
  • Content Count

    239
  • Joined

  • Last visited

  • Days Won

    2

Everything posted by Xaero

  1. Xaero

    10gbit speeds?

    What motherboard? Which slot on the board? If it's consumer hardware, most likely one or more of the slots is gimped, since consumer CPUs don't have enough PCIe lanes to provide full, current-gen lanes to every slot. Also, the reason we test with iperf is to more or less rule out OS tuning and configuration. If it works with iperf, the hardware is fine. If it DOESN'T work with iperf, something in the hardware is usually the bottleneck.
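    One quick way to check whether a slot is running at reduced width is to compare the card's capable link against its negotiated link. The bus address below is an example; substitute whatever `lspci` reports for your NIC:

```shell
# Find the NIC's PCI bus address:
lspci | grep -i ethernet

# Compare capable vs. negotiated PCIe speed/width
# (03:00.0 is an example address):
sudo lspci -s 03:00.0 -vv | grep -E 'LnkCap|LnkSta'
```

    If LnkSta shows fewer lanes or a lower generation than LnkCap, the slot (or its sharing arrangement with other slots) is the bottleneck.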
  2. Okay, so here's the deal: I have a few not-super-tech-savvy people who need a reliable way to upload several terabytes' worth of disk images to my server for off-site backup of assets.

    I've tried using WireGuard, but sometimes when I add or remove a user, the users before or after them become unusable, and I have to do a lot of manual prodding to keep things in order. I want things to be less involved for me while still being easy to use for them.

    So I set up ownCloud and added my shares as external storage. "This is awesome!" I thought, until I realized that with WebDAV we are limited to ~4 GB no matter what, and with the client we have to sync the entire directory, with no option to mount the share as a local disk. The WebUI works but isn't really suitable for the workflow.

    Okay, so I did a bit of Googling and found out that ownCloud basically became Nextcloud, and Nextcloud has an experimental disk mount option! FANTASTIC! Wrong again. The disk mount is based on DokanFS and crashes when attempting to open external share folders. DokanFS has been a thorn in my side before. It's not pleasant to work with, and it ends up with me manually prodding things even more than WireGuard did. Back to square one.

    Seafile: looks like it has a robust client application with the ability to mount to a drive letter on Windows machines! So I set it up... only to realize that it uses its own block-based storage system and requires all data to be imported into Seafile itself.

    Is there any way to do what I'm trying to do without a ton of manual upkeep? In the past I used SSHFS and CIFS over SSH tunnels, and it worked perfectly. But I had to maintain keys and passwords manually. I also had to set up each and every client (I mean, I used a script to do it, but it still required ME to do the setup) and there was plenty of manual intervention.

    I'd like something where users can be added and removed quickly and easily, forgotten passwords can be handled by the individuals, and the application is point-and-click for setup. End goals:
    - WebUI
    - Access data on Unraid shares
    - User management
    - Mount as a local filesystem (drive letter on Windows, path on Linux) without syncing all contents

    PS: For reference, I'm in a different state than the machines the images are being captured from. I need them backed up off-site for my own peace of mind. There are two backups currently (one a hot-spare disk, the other the image that spare was created from) but nothing off-site. Prior to me becoming involved, there was no backup. Once it's backed up to my Unraid server, I will back up the important images to the cloud. (Some of them can be rebuilt; it's tedious, but that data is otherwise disposable. Others are proprietary encrypted filesystems and must be byte-perfect.)
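    For reference, the old SSHFS workflow I mentioned boils down to something like this (host, user, and paths here are examples, not my real setup):

```shell
# Mount a remote Unraid share locally over SSH:
sshfs backupuser@my-unraid-host:/mnt/user/images /mnt/images \
    -o reconnect,ServerAliveInterval=15

# Work with the files as if they were local, then unmount:
fusermount -u /mnt/images
```

    It works well, but every client needs a key or password provisioned by hand, which is exactly the upkeep I'm trying to get away from.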
  3. Xaero

    10gbit speeds?

    The M.2 cache drive should be capable of delivering higher speeds than that. Use something more low-level than Samba for testing first. https://datapacket.com/blog/10gbps-network-bandwidth-test-iperf-tutorial/ Use iperf to test that the link is actually capable of full 10gbit speed. If not, eliminating the switch is fairly easy: connect the two machines directly with a cable, set a static IP on the PC in the same subnet, and rerun the iperf test. If the speed is rectified without the switch in place, several things can come up; I've heard that these MikroTik devices have a "dual OS" and that sometimes RouterOS works faster than SwitchOS, and vice versa, depending on the internal hardware of the device itself. If the link does hit full speed, then you're looking at Samba performance tuning and ensuring both clients are using SMB3 and not SMB2.
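    The test sequence above might look like this on the command line (addresses are examples; use your own):

```shell
# On the server (e.g. the Unraid box):
iperf3 -s

# On the client PC, run a 30-second test through the switch:
iperf3 -c 192.168.1.10 -t 30

# For the direct-cable retest, give both ends static IPs in the
# same subnet first (example addresses), then run it again:
#   ip addr add 192.168.100.1/24 dev eth0   # server
#   ip addr add 192.168.100.2/24 dev eth0   # client
iperf3 -c 192.168.100.1 -t 30
```

    A healthy 10gbit link should report somewhere in the neighborhood of 9.4 Gbit/s; anything dramatically lower points at the NIC, slot, cabling, or switch rather than Samba.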
  4. I'd wait for a response from Norco. I don't recall which backplane version I have, but mine is fully populated with 24 8 TB drives and I have been able to hot swap without any damage to any drives or the backplanes. Mine use a single 4-pin Molex power cable each. I know there were variants with SATA, and dual 4-pin Molex, prior to mine. I bought the case ~4 years ago now.
  5. +1 I have learned my lesson from this.
  6. Ah, I misunderstood the initial request. Yes, that makes sense and is approachable from a software-engineering perspective. Start at the source tree, level 0. If tree level 0 and all of its subdirectories fit, just merge with the existing tree. If tree level 0 doesn't fit, jump to tree level 1 and iterate over the subdirectories in level 1 until something doesn't fit. Once it doesn't fit, make a new tree on another drive. Simple enough. I was thinking this request was for some arbitrary content type. If it's just based on folders, it's not that hard.
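    A minimal sketch of that level-by-level, first-fit idea (names and sizes are invented; a real implementation would walk the filesystem and measure actual usage):

```python
# Hypothetical sketch of the split logic described above.
# Sizes are abstract units; a real version would stat the tree.

def assign_subdirs(total_size, subdirs, drives):
    """Place a directory tree onto drives.

    total_size -- size of the whole tree (level 0)
    subdirs    -- {name: size} for the level-1 subdirectories
    drives     -- list of free-space values, one per drive
    Returns {subdir_name: drive_index}.
    """
    # Level 0: if the whole tree fits on one drive, keep it together.
    for i, free in enumerate(drives):
        if total_size <= free:
            return {name: i for name in subdirs}

    # Level 1: iterate subdirectories, first-fit onto drives;
    # when one doesn't fit, "start a new tree" on another drive.
    placement = {}
    remaining = list(drives)
    for name, size in sorted(subdirs.items()):
        for i, free in enumerate(remaining):
            if size <= free:
                placement[name] = i
                remaining[i] -= size
                break
        else:
            raise ValueError(f"{name} ({size}) fits on no drive")
    return placement
```

    Recursing another level down when a single subdirectory is still too big is the same loop applied to that subdirectory's children.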
  7. What criteria would this be based on? There has to be the possibility of simple If-This-Then-That logic to decide when and where to split the data. This is a drawback of the merged-filesystem approach to bulk data storage that Unraid takes: if a file is larger than a single volume, it can't go on the array at all. Similarly, if a directory or set of directories is larger than a single volume, it must be split across multiple volumes. I can't think of a way any software engineer could approach this in a logical and universal fashion. Sure, we could write an implementation for one specific use case, but that's about the limit. The only thing I can think of would be to use a "keep together" list and allow selecting specific folders to try to force onto a single volume. If those directories ever didn't fit in the free space of a disk, it would have to store data on another disk. It's just not feasible, imho.
  8. Mounted to the back of the motherboard? Excellent. Use some thick thermal pads so that the pad makes contact with the motherboard tray. Problem solved. The motherboard tray is now a giant passive heatsink for your NVME disk.
  9. So I had everything working. I updated only the plugin, and now only my first peer can ever get a handshake. And even though I get a handshake, nothing is accessible: not the WebUI, not LAN or server-hosted content, and not the internet. I just want a relatively easy-to-use VPN for myself and the people I have accessing it, and I cannot figure out what I'm doing wrong now. I didn't touch any configuration other than updating WireGuard from the original release to the one from November. Not sure what all should be posted, so here are the censored WireGuard config(s) for the half-working peer:

    /etc/wireguard/wg0.conf:

    [Interface]
    #include=/webGui/include/update.wireguard.php
    #file=/etc/wireguard/wg0.conf
    #cfg=/etc/wireguard/wg0.cfg
    #cmd=update
    #name=Blackhole
    #vtun=wg0
    #wg=active
    #subnets1=192.168.1.74, 10.253.0.1
    #subnets2=192.168.1.0/24, 10.253.0.1
    #shared1=192.168.1.74, 10.253.0.0/24
    #shared2=192.168.1.0/24, 10.253.0.0/24
    #internet=192.168.1.74:51820
    #Home
    PrivateKey=
    Address=10.253.0.1
    ListenPort=51820
    PostUp=iptables -t nat -A POSTROUTING -s 10.253.0.0/24 -o br0 -j MASQUERADE
    PostUp=logger -t wireguard 'Tunnel WireGuard-wg0 started'
    PostDown=iptables -t nat -D POSTROUTING -s 10.253.0.0/24 -o br0 -j MASQUERADE
    PostDown=logger -t wireguard 'Tunnel WireGuard-wg0 stopped'

    [Peer]
    #
    PublicKey=
    PresharedKey=
    AllowedIPs=10.253.0.0/24, 192.168.1.74, 192.168.1.254

    /etc/wireguard/peers/peer-Blackhole-wg0-1.conf:

    [Interface]
    #
    PrivateKey=
    Address=10.253.0.2/32
    DNS=192.168.1.254

    [Peer]
    #Home
    PresharedKey=
    PublicKey=
    Endpoint=dyndns:51820
    AllowedIPs=192.168.1.0/24, 10.253.0.0/24

    I feel like an iptables rule is missing, but I don't know enough to figure out what it is. My reasoning: attempting to ping the DNS server (192.168.1.254) results in a Destination Host Unreachable from 10.253.0.1 (the Unraid server). The handshake completes, so I know the peer keys and endpoints are right; something on the routing side is wrong.
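    For anyone debugging a similar state, these are the standard commands I'd start with to see where routing breaks (addresses match the config above; all tools ship with a stock WireGuard/iptables setup):

```shell
# Confirm the handshake and transfer counters on the server:
wg show wg0

# Ask the kernel which route it would use for the peer's address:
ip route get 10.253.0.2

# Verify the MASQUERADE rule actually landed in the nat table:
iptables -t nat -L POSTROUTING -v -n

# Check whether IP forwarding is enabled (required for LAN access):
sysctl net.ipv4.ip_forward
```

    If the handshake shows traffic received but nothing returned, forwarding or the NAT rule is the usual suspect rather than the keys.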
  10. Oh man. I thought something seemed off just hadn't looked into it yet as I noticed a bunch of new content didn't seem to be working properly with tvdb. Phew.
  11. On the contrary I moved to this from Transdroid. Transdroid doesn't let you specify a download location, and is missing other features for rtorrent clients. A shame, really as when I used it with deluge it was fine. Deluge wasn't able to handle some of my larger downloads so I migrated away from it.
  12. This is something quick and dirty I did to enable the use of the Mobile UI plugin for ruTorrent with the Linuxserver.io docker: First, download and extract the mobile UI plugin from https://github.com/xombiemp/rutorrentMobile Once downloaded, extract the plugin into the appdata config folder for ruTorrent, naming it "mobile". The contents of that folder should look like this: [screenshot missing] Once complete, edit your ruTorrent docker and add the following path: [screenshot missing] Afterwards, when a mobile device connects to the ruTorrent WebUI, it will be presented with a slick mobile interface that looks like this (image stolen from Google): [screenshot missing]
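    The download-and-extract steps can be done in one shot from a shell. The appdata path below is an assumption based on a typical Unraid layout, and I'm assuming the repo's default branch is master; adjust both to match your system:

```shell
# Assumed appdata location for the Linuxserver.io ruTorrent container:
APPDATA=/mnt/user/appdata/rutorrent

# Fetch the plugin and drop it in place, named "mobile":
wget https://github.com/xombiemp/rutorrentMobile/archive/refs/heads/master.zip
unzip master.zip
mv rutorrentMobile-master "$APPDATA/mobile"
```

    After that, map the folder into the container's plugin directory via the docker template and restart the container.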
  13. That also didn't exist until well after this thread was made.
  14. I've been meaning to dig a bit deeper, as this solution seems "kludgy" overall (no offense - it's a great solution). I think the intended method would be to use a containerized router solution that doesn't update often, if at all. The client containers would connect using that as their network, and the endpoint connector (the "WAN" in a router sense) would have a static IP. All routing would go through that "WAN" container, and if it rebooted or updated, it would be a momentary network interruption rather than having to rebuild all of the containers. I'll update here if I ever get around to doing such a setup.
  15. This disk should be bootable, presuming the BIOS/UEFI settings are correct and that the EFI System Partition is readable by the UEFI firmware in use. That is going to vary heavily from vendor to vendor.
  16. What does fdisk -l /dev/disk/by-id/ata-KINGSTON.... look like? If I'm not mistaken, you'd need to be booting UEFI-only, or have the UEFI boot-order entry for the disk in question placed before the legacy options for all devices. My experience with UEFI booting is fairly limited. I do see you are using the raw disk (or at least think you are; the device node is cut off). It's possible with VirtIO to treat the first partition of a disk as if it were the whole disk, and then you end up with a disk image stored inside a partition.
  17. Need additional information. When using VirtIO did you use the RawDISK or did you create a disk image on a partition on that disk? Is the target machine using BIOS or EFI? What's the VM configuration look like? In theory, with a RawDISK you should be able to yank it from the VM environment and transplant it into a physical machine with no headaches. In fact I use VMs for testing disks that I image rather than a physical box as it simplifies my workflow. Most likely you are trying to boot a UEFI disk with a legacy BIOS or a legacy BIOS compatible disk with UEFI only.
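    Two quick checks can narrow down the UEFI-vs-legacy question (standard Linux tools; the device name is an example):

```shell
# Inside a running Linux on the disk in question, see how it booted:
[ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "booted via legacy BIOS"

# Check the partition table type: "gpt" usually means a UEFI-style
# install, "dos" (MBR) usually means legacy BIOS:
fdisk -l /dev/sdX | grep "Disklabel type"
```

    If the table type and the firmware boot mode disagree, that mismatch is almost certainly why the transplanted disk won't boot.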
  18. I'll note a few things here: switching to Docker Compose instead of docker run solves this automatically. Docker Compose supports always using container or service network types by name; it will just resolve the name to the container ID on startup from the docker-compose.yml file. Docker Compose also comes with the benefit of health checks and health-event-based management (VPN docker goes down? Automatically stop all dependent dockers. Comes back up? Start them back up.) Seems worth looking into; obviously not an easy change to implement, as the current docker profile system has been a product of years of development. It would be nice to move to a more standard docker implementation like Compose, though.
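    As a sketch of what that looks like in a compose file (image and service names are placeholders, not a recommendation):

```yaml
services:
  vpn:
    image: example/vpn-client       # placeholder image
    healthcheck:
      test: ["CMD", "ping", "-c", "1", "1.1.1.1"]
      interval: 30s
      retries: 3

  torrent:
    image: example/torrent-client   # placeholder image
    # Share the VPN container's network stack, referenced by
    # service name rather than a container ID:
    network_mode: "service:vpn"
    depends_on:
      vpn:
        condition: service_healthy
```

    Because the dependency is declared by service name, recreating the vpn container doesn't require rebuilding the dependents the way a by-ID `--net=container:` reference does.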
  19. Note that since you have read issues, it's likely that there will be some corruption of the data. Anything that is corrupt may or may not be recoverable, and will need to be replaced from a backup.
  20. Excellent workaround! Do you happen to know if it persists through updates to the nordvpn container?
  21. I believe NetBIOS would work with a NetBIOS-over-TCP implementation. By default, probably not.
  22. I think the latter can be achieved with one more routing rule; I just haven't sat down to figure it out.
  23. See my (old, outdated; don't use it anymore, Plex has been updated) Plex wrapper script here: https://github.com/Xaero252/unraid-plex-nvdec This script can be added to CA User Scripts to run after your automatic docker updates, to reinstall the modifications after the docker has updated. Similarly, you could do this with the pihole docker. I would also suggest pinging the pihole docker maintainer to see if they might be willing to add a layer for your modification. Since it's a direct extension of pihole, rather than a hack like my Plex script, it is more reasonable to include it in the actual docker image as an optional flag.
  24. I only do the overlay because I like persistent settings and bash history. 4 GB is massive overkill for that. I also know that adding just a couple of lines to the "go" file works, but then I have to add lines every time I customize something new. Overlay "just works". I do wholly agree, though: for a handful of scripts being added to the 'go' file, calling them with their interpreter is sufficient.