Everything posted by aarontry

  1. Does it work on 6.9.2? I can't see it in the app store as of now.
  2. Is there any way to remove a device from a cache pool through the command line?
  3. The procedure is not intuitive, and there is no indication of whether the device has been removed or not. I had another unsuccessful attempt. Here's what I did:
     1. Stop the array.
     2. Assign both devices to the same pool.
     3. Start the array.
     4. Stop the array.
     5. Delete the cache pool and create a new one with one device, leaving the other in UD.
     6. Start the array.
     What did I do wrong? saulgoodman-diagnostics-20211123-2008.zip
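     (Re the command-line question above: for reference, a minimal sketch of the plain btrfs route, assuming the pool is mounted at /mnt/cache and the device to drop is /dev/sdX1 — both paths are assumptions, so check the first command's output before running the rest. This is stock btrfs tooling, not the official Unraid procedure.)

        # list the pool's member devices; shows whether the SSD is still part of the filesystem
        btrfs filesystem show /mnt/cache
        # rebalance to single-device profiles so the remaining SSD can hold everything
        # (-f is needed because this reduces metadata redundancy)
        btrfs balance start -f -dconvert=single -mconvert=dup /mnt/cache
        # remove the device; btrfs migrates its data off before releasing it
        btrfs device remove /dev/sdX1 /mnt/cache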
  4. Thanks for the help! saulgoodman-diagnostics-20211121-1934.zip
  5. I followed the procedure, but it isn't working. I stopped the array and assigned no device to the pool. Then I started the array so it would forget the cache config, and stopped it again to configure two separate cache pools, each with one SSD. The two SSDs are still in sync. What steps am I missing?
  6. Hi, I had a cache pool set up with two 512GB SSDs in btrfs. A few days ago one of the drives reported two errors on the SMART page, so I decided to remove it from the pool. I created a new pool with only the good SSD and left the bad SSD in UD. This is where things got weird: the bad SSD still seems to be linked to the good one in some way, as if the old RAID setup were still in effect. When I delete or create files on the good SSD, the same action is mirrored on the bad one sitting in UD! I would expect the new pool to break the RAID 1 setup from the old cache pool. What am I missing here?
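     (A hedged guess at what is happening in these two posts: if the old btrfs signature on the "removed" SSD is never wiped, both devices still carry the same filesystem UUID, so the kernel can keep assembling them as the original RAID 1 pool and writes keep getting mirrored. A minimal check-and-fix sketch, where the removed SSD's partition /dev/sdY1 is an assumption — and note that wipefs destroys everything on that device:)

        # if both SSDs appear under one filesystem UUID, they are still one pool
        btrfs filesystem show
        # erase the leftover btrfs signature on the removed device only
        wipefs -a /dev/sdY1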
  7. I had the same issue after moving a vDisk from the cache to the array. The problem for me was that when I pointed the vDisk path to the new location, the vDisk type changed from "qcow2" to "raw". After I changed it back to qcow2, everything worked perfectly.
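     (For anyone hitting the same thing, a sketch of how to confirm the mismatch — the vdisk path is an assumption. qemu-img reports the actual on-disk format, and the disk's <driver> element in the VM XML (virsh edit <vm-name>) must match it:)

        qemu-img info /mnt/user/domains/win10/vdisk1.img   # prints "file format: qcow2" or "file format: raw"

        <driver name='qemu' type='qcow2' cache='writeback'/>   <!-- type must match the file format -->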
  8. I have also noticed that virtio-net is much slower than virtio. Could someone explain why the newer, now-default model is slower than the old virtio one?
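     (For anyone who wants to A/B the two themselves: the setting is the <model> element inside the VM's <interface> block, editable via virsh edit. A sketch of the relevant lines only — treat it as illustrative, since the exact model strings depend on the Unraid/libvirt version:)

        <interface type='bridge'>
          <model type='virtio'/>      <!-- or type='virtio-net' -->
        </interface>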
  9. Hi guys, I’ve been playing with dynamic CPU allocation configurations recently and I’m wondering if it’s possible to have memory allocation dynamically associated with allocated vCPUs on the same NUMA node. It sounds inefficient but maybe the virtmanager is smart enough not to allocate any vCPUs from a remote NUMA node. Any thoughts will be appreciated!
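     (libvirt does expose a knob close to this: <numatune> tells the host allocator which node guest memory should come from, so it can be kept local to the node the vCPUs are pinned on. A minimal sketch, with the nodeset and VM name as assumptions:)

        <numatune>
          <memory mode='preferred' nodeset='0'/>   <!-- 'strict' forces node 0; 'preferred' allows spill-over -->
        </numatune>

        # or adjust the stored config from the shell:
        virsh numatune <vm-name> --mode preferred --nodeset 0 --config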
  10. My server idles around 150W: 2 x E5-2680v4 + 128GB ECC DDR4 + one 10Gbps NIC + 6 HDDs (1 active, 5 spun down) + 7 fans at around 850 RPM on a Supermicro X10DRi.
  11. I have an X10DRi with two Xeon E5-2680v4 and 128GB of ECC DDR4 (4 x 32GB). I have 2 DIMMs per socket, and in dual-channel mode they are fast enough for most of the work (a bunch of Windows Server VMs, plus Ubuntu for software development and testing). For NUMA and memory configuration you can take a look at this article: https://frankdenneman.nl/2016/07/13/numa-deep-dive-4-local-memory-optimization/ Recently I purchased a used 10Gbps NIC (Supermicro AOC-STG-i2T) and I am passing the two ports through to the VMs.
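     (If anyone wants to see what that article's local-vs-remote distinction looks like on a dual-socket board like this, numactl prints the node layout and relative access costs. The output shape below is illustrative, not captured from this machine:)

        numactl --hardware
        # node 0 cpus: 0-13 28-41    node 1 cpus: 14-27 42-55
        # node distances: 10 (local) vs 21 (remote)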
  12. What's the purpose of the new plugin (unraid.net) if VPN is the preferred way of accessing unRaid? I already have a VPN set up, and I am considering switching to the plugin instead.
  13. The only vulnerability I can think of regarding the security of the unRaid server in this context is that there might be undiscovered security issues that allow attackers to bypass the form-based login and gain access to other services.
  14. Thanks for sharing this security advice. I often need to access the unRaid GUI while I'm out on a trip. I used to use OpenVPN to connect to home and access the management GUI from the LAN. Now, with 6.9.2, I have port forwarding set up for HTTPS to unRaid, and it's the only port I am exposing to the internet. A strong root password has been set, and all other services are behind my firewall. So now my question is: is accessing my server this way as safe as going through OpenVPN?
  15. M/B: Supermicro X10DRi
      CPU: Dual Xeon E5-2680v4
      RAM: 4 x 32GB DDR4 ECC
      Running 1 Windows 10 VM, 3 Windows Server 2019 VMs, and a few SQL Server dockers for development work, plus 20 other dockers for personal stuff. RAM usage is about 70%.
  16. I’m on 6.9.1 and the problem is still reproducible.
  17. I'm on 6.9.1, and I noticed that if the default route is on eth1, file copies still go through eth1 even when I connect via eth0's IP.
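     (That behaviour matches plain destination-based routing: replies follow the default route no matter which NIC the request arrived on. Making traffic for eth0's address leave via eth0 takes a source-based rule. A minimal iproute2 sketch, where the addresses are assumptions — eth0 = 192.168.1.10, its gateway = 192.168.1.1:)

        # send anything sourced from eth0's address through its own routing table
        ip rule add from 192.168.1.10 table 100
        ip route add default via 192.168.1.1 dev eth0 table 100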
  18. This is a great plugin and everything works perfectly! Recently I turned on compression, and I see that gzip only uses one thread, so it takes very long to compress a ~40GB database file. Any plans to add multithreaded compression to the plugin, or maybe a zstd option for faster compression and a better ratio? Also, what is currently the best way to back up SQL Server 2019 dockers with 200GB+ database files (.mdf and .bak) in the appdata folder?
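     (In case it helps the feature request along, both options already exist as stock CLI tools; a sketch of what multithreaded compression of an appdata folder looks like — paths and output names are assumptions:)

        # zstd using all cores
        tar -cf - /mnt/cache/appdata | zstd -T0 -o appdata.tar.zst
        # or pigz, a parallel gzip that keeps the existing .tar.gz format
        tar -cf - /mnt/cache/appdata | pigz > appdata.tar.gz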
  19. It works if I specify the static path, like "/mnt/cache/appdata/mssql/...". It's odd that the docker container doesn't like the other mapping. Could it be a bug in MSSQL or in Unraid?
  20. I meant /mnt/user... . My appdata share is set to prefer the cache, and when I map a volume to appdata it errors out.
  21. Hey guys, I am running Unraid 6.8.1 on a Xeon E3-1231 v3 with 16GB of RAM. I was trying to run an MSSQL (SQL Server 2019) docker but got an error:
      2020-05-04 20:04:21.89 spid9s Starting up database 'master'.
      2020-05-04 20:04:22.32 Server Common language runtime (CLR) functionality initialized.
      2020-05-04 20:04:23.22 spid9s Error: 17053, Severity: 16, State: 1.
      2020-05-04 20:04:23.22 spid9s FCB::ZeroFile(), GetOverLappedResult(): Operating system error 87(The parameter is incorrect.) encountered.
      2020-05-04 20:04:23.23 spid9s Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild.
      After searching around and a few tries on another Unraid server (also running 6.8.1) without a cache drive set up, I found that the error only occurs when I use volume mappings on a cache drive. If I map volumes to a non-cache drive, everything is fine. However, other docker containers run just fine on cache drives, so I had to set up another data share just for this SQL Server docker. Does anyone know why this is happening? I would appreciate any thoughts. Thanks!
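     (For anyone landing here with the same error 87: the common thread across these last three posts is that SQL Server fails only when its volume sits on the /mnt/user FUSE layer, and works when mapped to a physical path such as /mnt/cache. A hedged sketch of the direct mapping that avoids the problem — the container name, password, and host path are placeholders:)

        docker run -d --name mssql \
          -e ACCEPT_EULA=Y \
          -e SA_PASSWORD='YourStrong!Passw0rd' \
          -p 1433:1433 \
          -v /mnt/cache/appdata/mssql:/var/opt/mssql \
          mcr.microsoft.com/mssql/server:2019-latest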