BM118

Members
  • Posts: 15
  • Joined
  • Last visited

BM118's Achievements

Noob (1/14)

Reputation: 1

  1. Limetech have actually surprised me here, and in no way, shape or form did I ever expect to see a move like this. The licenses that are already out there will continue to work and are not being forced into a conversion/migration. You will still be able to transfer an existing license to a new flash drive if required. You will continue to receive exactly what you have already purchased, no matter what tier you have today, so there are no broken promises. You will also keep the option to upgrade across tiers after the new model comes into place if you need more drives.

     If, in future, you decide to purchase a "new model" license that has upgrade entitlements, you get to decide whether the current features and capabilities are sufficient for your needs when your included entitlements come to an end. If they are, don't pay the fee for the upgrade entitlements and stick with what you have. If you find a few months/years down the track that you want some fancy new features, then pay for your entitlement at that point. You don't even have to back-pay for the previous years.

     In no way is this a subscription. If a subscription is not paid before it expires, that product/service stops working or being accessible until it is paid. That is not what is happening here. On top of all of this, for those users who know they will be using the product for years to come, who have large array requirements, or who simply don't want small recurring fees for the entitlements, Limetech are still offering a lifetime usage and entitlements license.

     I personally think this is a great outcome for Limetech and for me as a user. I always understand the hesitation in paying more for a product, but weighing cost against your own personal requirements is literally the whole point of comparing solutions. If you don't want or need future updates, and you are on a new tier that requires entitlements, then don't pay for the update entitlements and stay on the version you are on. If you need/want the updates, then pay the small fee for that.

     For all the people complaining about this being a different situation because it's an OS, just stop. Either you don't know what you are actually talking about (e.g. don't allow direct inbound connections to UnRAID from the internet), or you do know what you are talking about and have determined that your requirement is continual security updates, at which point the answer is quite clear: either find a product that better suits your needs or pay the small fee.
  2. This is an option through PayPal; it's how I personally purchased my license, with fortnightly payments. If your country doesn't support this inside PayPal, then that's a problem between you and PayPal. Most countries offer something similar outside of PayPal through a bank of your choosing, and I recommend you investigate your options there. I know where you are coming from, though: cash flow is a real problem, not necessarily the total cost. The irony here is that this is exactly the same problem Limetech are trying to fix, and recurring payments have proven to be the most reliable and preferred way for businesses to manage cash flow. They need cash flow to continue innovating and developing the product, not just revenue.
  3. Wondering if anyone can assist. TL;DR: Mover Tuning was (and maybe still is) not moving all files as expected, even though it has been running and moving others. Invoking the mover manually moved all the files as expected.

     This same Mover Tuning setup has been working since 6.10, through 6.11 and, AFAIK, 6.12.4. Currently running 6.12.6. Over the Xmas break I decided to upgrade from 6.12.4 to 6.12.6, but I doubt that's related. I upgraded from 6.11.5 to 6.12.4 months and months ago, and this issue could have been around since then for all I know; it should be noted that some of the affected files were created well before December. I did change my cache setup to ZFS straight after the 6.11.5 to 6.12 update, but confirmed files were moving at that time (moving them off the cache is required to change the cache setup).

     While investigating and making assumptions about files being locked, I noticed this was affecting both old files (from well before the 6.12.6 update) and new ones (from this week, even some from yesterday). Using the "Compute all" button in Shares, I noticed instances of the cache still being listed when it should not have been, and other instances of drives being listed when they should not have been (note that all files were in "allowed"/"expected" locations before a move had run, but not after a move had run). I confirmed through the CLI, looking at the individual drive mounts, that the files were indeed on the wrong drives (a rough sketch of those checks is below). I did some checks and made sure there weren't any file locks etc.; I even made some edits to a few files to confirm they were writable and not locked.

     This morning I checked again and noticed the same files as before had not been moved, while other files had been moved in other shares. In an effort to monitor it live, I went to the Main tab, scrolled down and clicked the button to invoke the mover, and all files were moved as expected. Unfortunately I don't have screenshots of that, but I do have screenshots of the pertinent mover log from early this morning versus the manual mover run. Mover logging was not enabled for any of this, but "log when not moving due to rules" is enabled.

     Mover Tuning (the Nextcloud content on the cache should have been moved at this point (1.4GB of images), along with various other files in various shares):

     Manual mover (what you don't see here is the files in other shares that were also moved):
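     A rough sketch of the kind of checks described above, assuming a stock Unraid install; the share and file paths are made-up examples rather than the poster's actual paths, and the mover invocation may differ slightly between releases:

        # Compare where a share's files physically live, bypassing the /mnt/user view.
        ls -lR /mnt/cache/nextcloud/data/              # example path: files still sitting on the cache pool
        ls -lR /mnt/disk*/nextcloud/data/ 2>/dev/null  # example path: copies already on the array disks

        # Make sure nothing is holding a given file open before blaming locks.
        lsof /mnt/cache/nextcloud/data/example.jpg

        # Kick off the mover manually, equivalent to the button on the Main tab.
        /usr/local/sbin/mover start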
  4. Also having this issue on 6.11.5 with no customisation to the Samba configs; I am running an up-to-date UD plugin as well. I do not have any macOS/Time Machine or fruit configs/setups. This appears to be affecting my Blue Iris NVR system on a standard XFS array-based share (it is a hidden share). I do have a BTRFS cache drive involved, if that helps. For now I will do the "logging = 0" steps and see how we go (a quick way to verify the resulting Samba config is sketched below).
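     Not from the original post, but a minimal way to confirm what logging configuration Samba is actually using after applying that change; testparm is part of the standard Samba tools, and the grep pattern is just an example:

        # Dump the effective Samba configuration and look for any logging-related overrides.
        testparm -s 2>/dev/null | grep -iE 'log level|logging'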
  5. So far it seems to be running well, and luckily my board does allow the iGPU to be enabled when CSM is enabled, so in theory it should be all good. I will report back if it has any issues.
  6. It appears this is motherboard dependent, and Asus is a big issue in this area, whereas I am using Gigabyte. Essentially, on Intel 500 series chipsets and up, if you enable CSM there is a good chance the iGPU is not available. I thought this was an Intel limitation, but some research shows it's a firmware default that each vendor can change. So I will give this a go tomorrow and see how I go.
  7. While I appreciate the input, I tried a completely fresh, clean install on a brand new flash drive and get the exact same problem, but only when using 6.11. Additionally, this problem occurs well before any plugins etc. are loaded; it happens on the second file load as the OS starts. Using 6.10, everything works as expected.
  8. BIOS is up to date, running F8a. I was under the impression that with CSM enabled I lose access to the internal GFX, and therefore QuickSync passthrough to the containers, or is this just a Windows issue?
  9. Not sure if this should be in bug reports or general. TL;DR: on 6.10 I can boot with UEFI; with 6.11.2 and 6.11.3 I can't. Turning off UEFI allows it to at least boot, but I haven't tested any further (running a Gigabyte Z590 Aorus Master, 11500 CPU, 32GB RAM).

     I have used 6.10.x for 6+ months with no issues, doing the updates through the UI and rebooting. Around a week ago I tried to do the same with 6.11.2 and the server never came back online. I copied everything from the "previous" folder on the flash drive back over the same files on the root, turned the box back on and it booted fine; unfortunately I didn't have a monitor attached to see the reported issue. Since then I have tried the 6.11.3 update: same issue, same fix.

     I finally had the time to put a monitor on and see what was happening, so I tried the update again, and I get an error about "unable to allocate initramfs, bailing out". I tried a brand new flash drive made with the USB creation tool, a 6.11.3 install with no config etc., i.e. a barebones fresh install, and got exactly the same issue (photo attached). I am 99.9% confident this is not a RAM issue.

     I turned on CSM support and set the boot mode to legacy, and it seems to boot fine, at least with the "fresh" install, but I don't want to test with my actual drive until I can understand what the problem is. If I roll back to 6.10.3 everything works fine with no weird errors; ~20 Dockers and a few VMs start and run fine with near-constant reads and writes to unRAID. Where to from here?
  10. Hmm, I replied earlier, but the post isn't here. Anyway, no, not really. I did find the issue with the "Failed to mmap 0000:00:02.0 BAR 2" error message. It's likely the same issue you have, based on what I found when looking it up: basically, run "cat /proc/iomem" in a terminal on Unraid and look for that address range. In my case efifb was running in that range, which it can't be while passthrough is working. Adding video=efifb:off to the syslinux boot got rid of that error, and I thought I had it fixed, as I could install the driver in Windows (I had to install it while the VM had both VNC and the Intel graphics assigned).

     So I could get the device to show as active and OK in Windows at install, but upon a reboot the VM would lock up, crash and go into boot repair mode. Again, removing the Intel graphics lets it boot fine. I tried a couple of different versions of the Intel drivers, and numerous different combinations of BIOS and i440FX and Q35 VM types, to no avail.

     Your boot entry should look like the below if this is the only modification you have made and you run Unraid with the default boot config (a fuller sketch of the file follows this post). Note I only have the one GPU in my system, it was assigned to the Unraid OS, and I wanted to assign the whole thing to the VM.

     kernel /bzimage
     append initrd=/bzroot video=efifb:off

     If anyone can shed any light on getting this to work, or even a "it won't work because of this", I would be forever grateful. At this point it seems like I will just need to get a video card for the VM, as I need either Intel or Nvidia hardware decoding/encoding in that VM. I could then use the internal GPU for my Dockers, but I was hoping to have the dedicated GPU for Docker and the Intel for the VM.
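     A hedged sketch of the two pieces referenced above, assuming a stock Unraid flash drive; the file path and "Unraid OS" label are the defaults, but check your own syslinux.cfg before editing:

        # See whether efifb has claimed the iGPU's memory range (the symptom described above).
        cat /proc/iomem | grep -i -B1 efifb

        # /boot/syslinux/syslinux.cfg, default boot entry with only the append line changed:
        label Unraid OS
          kernel /bzimage
          append initrd=/bzroot video=efifb:off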
  11. Hi all, hoping someone can assist. I have been following some of the other posts about this issue and think I have it right, but I could be wrong. I am also uncertain whether this should be in the prerelease section instead, as I am running 6.10-RC4.

     Simply put, I have a requirement for a Win10 VM and would like to pass through the 11th Gen Intel graphics.

     Gigabyte Z590 Aorus Master, latest BIOS, 11500 CPU. BIOS has CSM disabled and VT-d enabled. The IOMMU group shows only the IGD on 00:02.0 (8086:4c8a), it is bound to vfio, and the logs show this is successful on boot (a quick way to confirm the binding is sketched after this post). Unraid 6.10-RC4, with no *GPU plugins installed, and there never have been on this build. Win10 21H2 with the 1.215-2 virtio drivers.

     I can create the VM (i440FX 6.2 type with OVMF BIOS) with VNC graphics and enable RDP etc. I then shut it down, change the graphics from VNC to Intel UHD 750 and turn it back on. At this point everything seems to be going well; however, once the drivers are installed, either from Windows Update or using the latest driver from Intel's site, Device Manager shows Error 43 and the device is stopped. If I disable the device in Device Manager I can reboot the machine, but if I don't disable it, the VM won't boot again. If I remove the IGD and change back to VNC, it boots happily and I can access everything, but as soon as I put the IGD back on it will not boot again (I could force the driver removal to at least get back in, but that seems pointless).

     I saw references in lots of other posts to changing the syslinux file with boot strings (sorry, I know Windows inside and out, but not this kind of stuff in Linux), but that all seems to be 6.9 related. I do see some errors in the QEMU log, but am uncertain whether they are actually a problem: specifically "Failed to mmap 0000:00:02.0 BAR 2. Performance may be slow" and a complaint that the ROM contents cannot be read. I assume that's not a problem for an IGD?

     Have I missed something? Is 11th gen just not supported yet? Is there anyone out there who can point me in the right direction?
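     A minimal sketch of confirming that vfio binding from the Unraid console; the PCI address and vendor:device ID come from the post above, but the commands themselves are an assumption on my part, not something taken from the original thread:

        # Show which kernel driver currently owns the iGPU at 00:02.0.
        lspci -nnk -s 00:02.0
        # Expected when the vfio binding took effect:
        #   Kernel driver in use: vfio-pci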
  12. Alright, case closed for the time being. I will go back to sleuthing through all the other settings to tune it a bit more. Thanks.
  13. I did; I used 50GB, but the screen refreshes back to 0. I am a little lost now as to what we are trying to say. Am I the only one having this issue, or do others running 6.10.0-RC4 have the same problem setting this value on a disk? Or is this not yet meant to be visible and is a new thing?
  14. Sure. Please see attached. The array is online with a 5TB copy happening at the moment so the field is grayed out. I will get another one when the copy is done tonight if you like.
  15. Hi all. Note: I am a new user to UNRAID. I am having an issue configuring the Min Free Space on my disks. I have an 11-disk array with 2 parity drives plus a single-disk cache pool. On the cache drive I was able to set Min Free Space as preferred, hit Apply and it saved; it even persists through reboots, which is great. On the spinners, though, I enter the same value (50GB) and hit Apply, it thinks about it for a second just like it did with the cache, and when the screen refreshes the value is gone. I have tried starting/stopping the array, rebooting, different values, a different browser and incognito mode, all to no avail. I then tried the same value on the share, and it worked there, just like it did for the cache. This leads me to believe I simply don't understand how this function is meant to work and I am doing something wrong. Any help would be appreciated.