Leaderboard

Popular Content

Showing content with the highest reputation on 03/22/19 in all areas

  1. I can confirm this worked for me although I did end up doing a blanket auth on the proxy. Thanks again!
    2 points
  2. I'm on it, guys. It looks like there has been a switch to .NET Core, which requires changes to the code; I've now made those changes and a new image is now building.
    2 points
  3. OK, this may be dumb, but I have a use case this would be really effective for. Currently I pass through 2 unassigned 10k drives to a VM as scratch disks for audio/video editing. In the VM they are then set up as RAID 0. Super fast. The problem is that the drives are then bound to that VM. I can't use the disks for any other VM, nor span a disk image (work areas) for separate VMs on that pool. I think it would be better to have the host (unRAID) manage the RAID, and then mount the "network" disks and use them that way. Since the VM uses paravirtualized 10GbE adapters, performance should be no issue, and multiple VMs could access them as well. Why don't I just add more drives to my current cache pool? Separation. I don't want the Dockers that are running, or the mover, or anything else to interfere with performance. Plus, I'm not sure how mixing SSDs and spinners would work out. Maybe OK? I'm sure someone has done that. TL;DR: Essentially I'm suggesting that we be able to have more than one pool of drives in a specifiable RAID setup (0 and 10, please!)
    1 point
  4. As mentioned earlier, you need msgpack, setuptools and Python 3. However, that will lately result in the message "Using a pure-python msgpack! This will result in lower performance." when running backups. From what I can tell, it will then only use one core, slowing backups down quite a lot. (A quick way to check which msgpack is in use is sketched after this list.)
    1 point
  5. Yay, a giveaway, I'm in... largewhiteguy on Twitter :)
    1 point
  6. Not with "high water", if want to fill disk1 first up to minimum free space use "fill up" instead.
    1 point
  7. Dude, thanks. The price of a nice pint of UK beer is on its PayPal route to you.
    1 point
  8. Thanks, just did a force update and mine started right back up! Appreciate all you do for us!
    1 point
  9. pm8001 is the chip used in the Adaptec; I would suggest trying again using the onboard SATA ports if possible, or replacing the Adaptec with one of the recommended LSI HBAs.
    1 point
  10. Hi, my Twitter account is exxxtr3me. I don't have a build right now that's worth sharing; I just started with unRAID.
    1 point
  11. For most apps, probably not, but as you can see some applications are coded in such a way that they only use the /RPC2 mount (nzb360 being one), and in a scenario where Sonarr/Radarr may be installed at a remote location connected over the internet, I think it's probably safer to expose /RPC2 than it would be to port forward port 5000, unless I'm missing something here. I could look at adding an env var to control whether you allow access to /RPC2 or not; it's a bit of work, so it may take a while before it's in place (lots going on). While I agree that having it disabled by default is the way to go, I do not want to shut the door on existing users who may currently rely on this, so I will be setting the default to disabled for all new users and leaving it enabled for all existing users; they can then decide whether they want to disable it or not via the new env var.
    1 point
  12. Alright, I got myself all set up now. I really wanted to get my first GPU working for the VM so that any time I boot my server I've got video output. So I passed the vBIOS ROM to the VM with the first GPU passed through, but that still gave me a Code 43 error. Then I found a post about making sure the server was booting in legacy mode and not UEFI mode. BINGO! That did the trick. I now have my first-slot GPU passed through to my VM, the driver loads fine, and there's no flickering or video distortion like I was getting before. 😁 (A quick way to check which mode the server booted in is sketched after this list.)
    1 point
  13. QEMU 4.0 RC0 has been released - https://www.qemu.org/download/#source - and there's a nice specific mention in the changelog (https://wiki.qemu.org/ChangeLog/4.0) of things discussed in this thread. Now that these changes are standard with the Q35 machine type in 4.0, I think this could also be an additional argument against potentially forcing Windows-based VMs to the i440fx machine type, if it brings things into performance parity. If @limetech could throw this into the next RC for people to test out, that would be much appreciated!
    1 point
  14. Backup VM XML files and OVMF nvram files. This script backs up VM XML files and OVMF nvram files to a folder of your choice; they are put into a dated folder. Just set the location in the script. (A restore sketch follows this list.)

      #!/bin/bash
      # backs up vm xml files and ovmf nvram files
      # change the location below to your backup location
      backuplocation="/mnt/user/test/"
      # do not alter below this line
      datestamp="_"`date '+%d_%b_%Y'`
      dir="$backuplocation"/vmsettings/"$datestamp"
      # don't change anything below here
      if [ ! -d "$dir" ] ; then
        echo "making folder for today's date $datestamp"
        # make the directory as it doesn't exist
        mkdir -vp "$dir"
      else
        echo "As $dir exists continuing."
      fi
      echo "Saving vm xml files"
      rsync -a --no-o /etc/libvirt/qemu/*xml "$dir"/xml/
      echo "Saving ovmf nvram"
      rsync -a --no-o /etc/libvirt/qemu/nvram/* "$dir"/nvram/
      chmod -R 777 "$dir"
      sleep 5
      exit

      vm_settings_backup.zip
    1 point
  15. Can I change my btrfs pool to RAID0 or other modes? Yes. For now it can only be changed manually, and the new config will stick after a reboot, but note that changing the pool using the WebGUI, e.g. adding a device, will return the cache pool to the default RAID1 mode (note: starting with unRAID v6.3.3 the cache pool profile in use is maintained when a new device is added using the WebGUI, except when another device is added to a single-device cache, in which case it will create a RAID1 pool). You can add, replace or remove a device and maintain the profile in use by following the appropriate procedure in the FAQ (remove only if it does not go below the minimum number of devices required for that specific profile). It's normal to get a "Cache pool BTRFS too many profiles" warning during the conversion; just acknowledge it.

     These are the available modes. Enter these commands in the balance window on the cache page and click balance**; note that if a command doesn't work, type it instead of copy/pasting it from the forum, as sometimes extra characters are pasted and the balance won't run.

     ** Since v6.8.3 you can choose the profile you want from the drop-down window and it's not possible to type a custom command; all the commands below can still be used on the console.

     Single: requires 1 device only; it's also the only way of using all the space from different-size devices, btrfs's way of doing a JBOD spanned volume; no performance gains vs a single disk or RAID1.
        btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache

     RAID0: requires 2 devices; best performance, no redundancy; if used with different-size devices only 2 x the capacity of the smallest device will be available, even if the reported space is larger.
        btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache

     RAID1: the default; requires at least 2 devices; to use the full capacity of a 2-device pool they all need to be the same size.
        btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

     RAID10: requires at least 4 devices; to use the full capacity of a 4-device pool they all need to be the same size.
        btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache

     RAID5/6 still has some issues and should be used with care, though most serious issues have been fixed in the current kernel as of this edit (4.14.x).

     RAID5: requires at least 3 devices.
        btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/cache

     RAID6: requires at least 4 devices.
        btrfs balance start -dconvert=raid6 -mconvert=raid1 /mnt/cache

     Note about RAID6**: because metadata is RAID1 it can only handle 1 missing device, but it can still help with a URE on a second disk during a replace, since metadata uses a very small portion of the drive. You can use raid5/6 for metadata, but it's currently not recommended because of the write hole issue; it can, for example, blow up the entire filesystem after an unclean shutdown.

     ** Starting with Unraid v6.9-beta1 btrfs includes support for RAID1 with 3 and 4 copies, raid1c3 and raid1c4, so you can use raid1c3 for metadata to get the same redundancy as raid6 for data (but note that the pool won't mount if you downgrade to an earlier release before converting back to a profile supported by the older kernel):
        btrfs balance start -dconvert=raid6 -mconvert=raid1c3 /mnt/cache

     Obs: -d refers to the data, -m to the metadata. Metadata should be left redundant, i.e. you can have a RAID0 pool with RAID1 metadata; metadata takes up very little space and the added protection can be valuable.

     When changing pool mode, confirm that when the balance is done the data is all in the newly selected mode by checking "btrfs filesystem df" on the cache page (an illustrative example of what a RAID10 pool should look like is sketched after this list). If there is more than one data mode displayed, run the balance again with the mode you want; for some unRAID releases and their included btrfs-tools, e.g. v6.1 and v6.2, it's normal to need to run the balance twice.
    1 point
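
Sketch for item 4: a quick way to check from the console whether Python 3 has the compiled msgpack extension available, or whether borg will fall back to the slower pure-Python implementation. This is only a sketch; the module name msgpack._cmsgpack is an assumption based on recent msgpack-python releases and may differ on older versions.

    # succeeds only if the compiled msgpack extension was built/installed
    python3 -c "import msgpack._cmsgpack" \
        && echo "compiled msgpack available" \
        || echo "pure-Python msgpack fallback (slower)"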
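
Sketch for item 12: a quick way to confirm from the unRAID console whether the host itself booted in UEFI or legacy (CSM) mode, assuming a stock Linux kernel, which exposes /sys/firmware/efi only on UEFI boots.

    # /sys/firmware/efi is only present when the host booted via UEFI
    if [ -d /sys/firmware/efi ]; then
        echo "host booted in UEFI mode"
    else
        echo "host booted in legacy (CSM) mode"
    fi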
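
Sketch for item 14: restoring a saved VM definition on the same host, using the backup location from the script above. The dated folder and the VM name MyVM are placeholders; adjust them to the files you actually have. virsh define re-registers the XML with libvirt, and the nvram files just need to be copied back to their original location.

    # placeholders: adjust the dated folder and VM name to your backup
    cp /mnt/user/test/vmsettings/_22_Mar_2019/nvram/* /etc/libvirt/qemu/nvram/
    virsh define /mnt/user/test/vmsettings/_22_Mar_2019/xml/MyVM.xml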
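
Sketch for item 15: roughly what a successful RAID10 conversion looks like when checked from the console. The sizes below are made-up placeholders; the point is that Data (and, with the command from item 15, Metadata) should list only the RAID10 profile, while GlobalReserve is always single.

    btrfs filesystem df /mnt/cache
    # Data, RAID10: total=400.00GiB, used=250.00GiB
    # System, RAID10: total=64.00MiB, used=112.00KiB
    # Metadata, RAID10: total=2.00GiB, used=1.10GiB
    # GlobalReserve, single: total=512.00MiB, used=0.00B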