Leaderboard

Popular Content

Showing content with the highest reputation on 09/13/18 in all areas

  1. It was true a month ago. In reality it's only a five-minute job to change the DNS to Cloudflare. No massive drama.
    1 point
  2. On further thought, I can't in good conscience leave this as is. Unraid does not have an easy method of keeping up with the health of RAID arrays presented in bulk by hardware RAID cards; it is designed to manage each disk individually. This means that it is on you to keep up with the status of the drives in those pools, and when you have a drive failure, Unraid won't be able to warn you. When you exceed the number of allowed drive failures in that array, you will lose ALL the data on it, and Unraid will not have been able to warn or protect you.

     What others in your situation have done is create an individual RAID0 array for each of your disks, then allow Unraid to use those individual disks relatively normally. This at least allows Unraid to handle complete drive failures gracefully if they are part of the parity-protected array. SMART status and predictive health aren't automatically supported, but at least when a drive fails, it can be properly emulated. The downside is that your drive speeds won't benefit from striping: you will be limited to the read throughput of each drive individually, and write performance will be roughly halved.

     Also, my suggestion of leaving your RAID arrays as unassigned devices has its drawbacks as well. Unassigned devices don't participate in user shares, and you have to set up mount points and keep up with them yourself. You are stepping outside the normal usage for Unraid, and thus the general advice that you find here on the forums may not apply to your setup. Some things that are easy to do if you follow the norms will be either not possible or difficult to accomplish on your RAID volumes.

     Unraid and RAID-only hardware do NOT mix well. You have been warned. RAID and Unraid are NOT a replacement for a good backup plan. Data loss has many causes, and your configuration has more risk than most. If you have a high tolerance for learning new things, going out on your own and forging new paths, then by all means, continue as planned. If you would rather take the easy path and follow where many have already been, then you need to source hardware that is tried and tested to work well.
    1 point
  3. You are aware that since you are presenting a 30TB volume to Unraid as a single disk, you will need a second 30TB volume to be able to use Unraid's parity protection, right? With the drives you have outlined, the only logical (to me) layout is using the 12 drives in the parity-protected array and keeping your two current RAID volumes as unassigned devices. You will lose write speed to your RAID1 if it's part of the Unraid parity scheme, and you will lose the entire capacity of the RAID6, because it will have to be one of the parity drives. You can't have data slots with larger capacity than either of the parity slots.
    1 point
  4. Hi there, in June/July I had to set up a gaming rig for the 12-year-old son of a friend. They had a limited budget of around 700 EUR, so roughly 820 USD. See the following compilation:

     MB - Gigabyte X470 Aorus Ultra Gaming AMD X470 So.AM4 Dual Channel DDR4 ATX Retail - Cost $130 - Amazon
     CPU - AMD Ryzen 5 2400G 4x 3.60GHz So.AM4 BOX - Cost $159 - Amazon
     RAM - 16GB G.Skill RipJaws V black DDR4-3200 DIMM CL16 Dual Kit - Cost $153 - Amazon
     Power Supply - 550 Watt Seasonic FOCUS Plus Modular 80+ Gold - Cost $74 - Amazon
     Graphics - Gigabyte GeForce GTX 1060 Mini ITX OC 6GB GDDR5 Graphics Card (GV-N1060IXOC-6GD) - Cost $280 - Amazon
     M.2 - Intel Optane Memory Module 32 GB PCIe M.2 80mm MEMPEK1W032GAXT - Cost $58 - Amazon
     HD - HGST/Hitachi (HUA723020ALA641) Ultrastar 7K3000 2TB 64MB 7200RPM 3.5" (Enterprise Grade) SATA III 6.0Gb/s Hard Drive - Cost $55
     SSD - 500GB Crucial MX500 2.5" (6.4cm) SATA 6Gb/s 3D-NAND TLC (CT500MX500SSD1) - Cost $99 - Amazon
     Case - MasterBox 5 Mid-tower Computer Case with Internal Configuration - ATX, Micro ATX, Mini ITX Supported - Black - Cost $66 - Amazon
     ODD - LG Electronics GH24NSD1 DVD-Writer SATA Bulk - Cost $27 - Amazon
     OS - Windows 10 Home - Cost $99 - Amazon

     Total: $807 x2 = $1614

     Yannick is completely satisfied with his rig; it is fast enough for his games like Minecraft, Fortnite, etc. The system is really silent, almost noiseless. The MB is fit for the future, ready for a Ryzen 7 2700X if more power is needed. If more GPU power is needed, just buy a dedicated one. The Win 10 Pro OS was just around 5 bucks; in Germany we can buy legal keys of unused OEM software. Microsoft doesn't like this, but it's legal. And it works. I hope you enjoy this suggestion.
    1 point
  5. The only thing you need to do is enable strict port forwarding and connect to a supported endpoint, the rest is done automatically.
    1 point
  6. That is a very nicely balanced barebones build for gaming. However, I think with that budget you could for sure build an unRAID Ryzen "2 gamers, 1 PC" build.
    1 point
  7. Theoretically yes, but even theoretically the benefit is minimal. No, it doesn't work that way: the cache does not play any role in calculating capacity. I have seen various 4TB drives with slightly different sizes in the past, but that has nothing to do with how much cache they had.
    1 point
  8. I think the path of least resistance is going to go something like this:
     1. Find a Linux live distribution with appropriate drivers.
     2. Allocate enough free space for your anticipated data recovery on either your cache or an unassigned devices drive.
     3. Boot the Linux distribution from (1), mount the ZFS pool and the target device, and copy the data (a rough sketch of this step follows below).
     4. Profit?
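     A minimal sketch of step 3, assuming the pool is named 'tank' and the recovery space is mounted at /mnt/recovery (both names are placeholders, and the live distro must ship the ZFS utilities):

     # Import the pool read-only so the source is never modified
     zpool import -o readonly=on tank
     # Copy everything across, preserving permissions and attributes
     cp -a /tank/. /mnt/recovery/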
    1 point
  9. If you keep it sparse, it will only use the space actually written to; more info here.
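     A quick way to see this from the host (the file name is just an example): a sparse file reports an apparent size much larger than the space actually allocated:

     # Create a 30G sparse file: apparent size 30G, zero blocks allocated
     truncate -s 30G /mnt/cache/example.img
     ls -lh /mnt/cache/example.img   # reports 30G (apparent size)
     du -h /mnt/cache/example.img    # reports ~0 (allocated blocks)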
    1 point
  10. I'll be honest, I would rather see the limited resources available spent on the product (i.e., unRAID, e.g., fixing bugs and adding features) rather than on trying to please the vocal few who can't deal with change. I think this sets a precedent that things are done for whoever wins the "who screams the loudest" contest (with bonus points for the "who writes the vaguest complaint" award). #ProgressIsGood
    1 point
  11. Add :testing to the repository field.
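     For example (image name is hypothetical), the template's Repository field would change from someauthor/someapp to:

     someauthor/someapp:testing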
    1 point
  12. As a software engineer, I very very much appreciate the dark theme.
    1 point
  13. He was wondering if Limetech had quit, which would explain why the forums were so quiet, but that is not quite the case: they were just moving.
    1 point
  14. How do I keep my sparse vdisk as small as possible? How do I enable TRIM on my Windows 8/10 or Windows Server 2012/2016 VM?

     NOTE: according to this post by @aim60, virtio devices also support discard on recent versions of QEMU, so you just need to add the discard='unmap' option to the XML. Still going to leave the older info here for now, just in case.

     By default vdisks are sparse, i.e., you can choose a 30GB capacity but it will only allocate the actual required space and use more as required; you can see the current capacity vs. allocated size by clicking on the VM name. The problem is that over time, as files are written and deleted, updates installed, etc., the vdisk grows and it doesn't recover the space from deleted files. This has two consequences: space is wasted, and if the vdisk is on an SSD that unused space is not trimmed. It's possible to "re-sparsify" the vdisk, e.g., by cp'ing it to another file, but that is not very practical and there's a better way. You can use the virtio-scsi controller together with discard='unmap'; this allows Windows 8/10 to detect the vdisk as a "thin provisioned drive", and any files deleted on the vdisk are immediately recovered as free space on the host (might not work if the vdisk is on a HDD). This also allows fstrim to then trim those now-free sectors when the vdisk is on an SSD. On an existing vdisk it's also possible to run Windows defrag to recover all unused space after changing to that controller.

     Steps to change an existing Windows 8/10 VM (also works for Windows Server 2012/2016):

     1) First we need to install the SCSI controller. Shut down the VM (for Windows 8/10 I recommend disabling Windows Fast Startup -> Control Panel\All Control Panel Items\Power Options\System Settings before shutdown, or else the VM might crash on first boot after changing the controller). Then edit the VM in form mode (the toggle between form and XML views is on the upper right side) and change an existing device other than your main vdisk or virtio driver cdrom to SCSI, for example your OS installation device if you still have it; if not, you can also add a second small vdisk and choose SCSI as the vdisk bus. Save the changes.

     2) Start the VM and install the driver for the new "SCSI controller"; look for it on the virtio driver ISO (e.g., vioscsi\w10).

     3) Shut down the VM, edit the VM again (again using the form view), and change the main vdisk controller to "SCSI". Now switch to the XML view and add discard='unmap' to the vdisk's driver line, after cache='writeback' (see the before/after example below).

     4) Start the VM (if you added a 2nd vdisk you can remove it now before starting); it should boot normally, and you can re-enable Windows Fast Startup.

     5) Run Windows "Defragment and Optimize Drives", check that the disk is now detected as a "Thin provisioned drive", and run optimize to recover all previously unused space. From now on, all deleted files on the vdisk should be immediately trimmed.

     Note: if you later edit the VM using the GUI editor, these changes will be lost and will need to be redone.
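     An example of the change in step 3, based on standard libvirt disk syntax (the type and cache values shown are assumptions; keep whatever your template already has and only add the discard attribute):

     Before:
     <driver name='qemu' type='raw' cache='writeback'/>

     After:
     <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>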
    1 point
  15. How do I create a vdisk snapshot on a btrfs device?

     There are two methods of creating an instant copy/snapshot with btrfs: if it's a single file like a vdisk, use the first one; if you want to snapshot an entire folder, use the second. E.g., I have all my VMs in the same folder/subvolume, so with a single snapshot they are all backed up instantly.

     Method 1 - Creating a reflink copy:

     A simple way of making an instant copy of a vdisk is creating a reflink copy, which is essentially a file-level snapshot:

     cp --reflink /path/vdisk1.img /path/backup1.img

     Requirements: both the source and destination file must be on the same btrfs volume (can be a single device or a pool), and copy-on-write must be on (enabled by default).

     Method 2 - Snapshot:

     Btrfs can only snapshot a subvolume, so the first thing we need to do is create one. The example below uses the cache device; it can also be done on an unassigned device by adjusting the paths.

     btrfs subvolume create /mnt/cache/VMs

     You can use any name you want for the subvolume; I use one for all my VMs. The subvolume will look like a normal folder, so if it's a new VM, create a new folder inside the subvolume with the vdisk (e.g., /mnt/cache/VMs/Win10/vdisk1.img); you can also move an existing vdisk there and edit the VM template. Now you can create a snapshot at any time (including with the VM running, but although this works it's probably not recommended, since the backup will be in a crash-consistent state). To do that, use:

     btrfs subvolume snapshot /mnt/cache/VMs /mnt/cache/VMs_backup

     If at any time you want to go back to an earlier snapshot, stop the VM and either move the snapshot into place or edit the VM and change the vdisk location to the snapshot. To replace the vdisk with a snapshot:

     mv /mnt/cache/VMs_backup/Win10/vdisk1.img /mnt/cache/VMs/Win10/vdisk1.img

     Or edit the VM and change the vdisk path to confirm this is the one you want before moving it, e.g., change from /mnt/cache/VMs/Win10/vdisk1.img to /mnt/cache/VMs_backup/Win10/vdisk1.img. Boot the VM, confirm this is the snapshot you want, shut down, and move it to the original location using the mv command above.

     Using btrfs send/receive to make incremental backups:

     Snapshots are very nice, but they are not really a backup, because if there's a problem with the device (or even serious filesystem corruption) you'll lose your VMs and snapshots. Using btrfs send/receive you can make very fast copies (except for the initial one) of your snapshots to another btrfs device (it can be an array device or an unassigned device). Send/receive only works with read-only snapshots, so they need to be created with the -r option, e.g.:

     btrfs subvolume snapshot -r /mnt/cache/VMs /mnt/cache/VMs_backup

     Run sync to ensure that the snapshot has been written to disk:

     sync

     Now you can use send/receive to make a backup of the initial snapshot:

     btrfs send /source/path | btrfs receive /destination/path

     e.g.:

     btrfs send /mnt/cache/VMs_backup | btrfs receive /mnt/disk1

     There's no need to create the destination subvolume; it will be created automatically.

     Now for the incremental backups: say it's been some time since the initial snapshot, so you'll make a new one:

     btrfs subvolume snapshot -r /mnt/cache/VMs /mnt/cache/VMs_backup_01-Jan-2017

     Run sync to ensure that the snapshot has been written to disk:

     sync

     Now we'll use both the initial and current snapshots for btrfs to send only the new data to the destination:

     btrfs send -p /mnt/cache/VMs_backup /mnt/cache/VMs_backup_01-Jan-2017 | btrfs receive /mnt/disk1

     A VMs_backup_01-Jan-2017 subvolume will be created on the destination, much faster than the initial copy; e.g., my incremental backup of 8 VMs took less than a minute.

     A few extra observations. To list all subvolumes/snapshots:

     btrfs subvolume list /mnt/cache

     You can also delete older, unneeded subvolumes/snapshots, e.g.:

     btrfs subvolume delete /mnt/cache/VMs_backup

     To change a snapshot from read-only to read/write, use btrfs property, e.g.:

     btrfs property set /mnt/cache/VMs_backup_01012017 ro false

     A snapshot only uses the space required for the changes made to the subvolume, so you can have a 30GB vdisk and 10 snapshots using only a few MB of space if your vdisk is mostly static.
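     To check how little space the snapshots actually consume, recent btrfs-progs can break usage down into shared and exclusive data (paths as in the examples above):

     # "Exclusive" is data unique to each snapshot; shared extents are counted once
     btrfs filesystem du -s /mnt/cache/VMs /mnt/cache/VMs_backup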
    1 point
  16. 1) Generate SSH keys on your client machine:

     ssh-keygen -t rsa -b 4096 -C "[email protected]"

     2) Add the generated key to your client's ssh-agent:

     eval "$(ssh-agent -s)"
     ssh-add ~/.ssh/id_rsa

     3) Copy the generated public key to your UNRAID server using:

     ssh-copy-id -i ~/.ssh/id_rsa.pub root@tower

     Then log in to your UNRAID over SSH and:

     4) Copy authorized_keys from the root user's home to the flash drive using:

     cp /root/.ssh/authorized_keys /boot/config/ssh/

     5) Edit /boot/config/go on the flash drive and add these lines:

     mkdir /root/.ssh/
     cp /boot/config/ssh/authorized_keys /root/.ssh/authorized_keys
     chmod 700 /root/.ssh
     chmod 600 /root/.ssh/authorized_keys

     This way your key-based SSH access to UNRAID will survive a restart, because the go script recreates /root/.ssh on every boot. Hope this helped.
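     To confirm that key-based login works, a quick test from the client that fails rather than falling back to a password prompt:

     ssh -o PasswordAuthentication=no root@tower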
    1 point
  17. +1 I've been trying some of the other "management" UIs and they are not as "user friendly" as unRAID's.
    1 point