Leaderboard

Popular Content

Showing content with the highest reputation on 10/19/18 in all areas

  1. What exactly would you have us do? Send a strongly worded email to Gigabyte?
    2 points
  2. Just a simple little plugin to act as a front end for any little scripts that you may have that you need to run every once in a while, and don't feel like dropping down to the command line to do it (or anything that I happen to run across here on the forum that will be of use to some people). Install it via Community Applications. Only a couple of included scripts:
    - Delete .DS_Store files from the array
    - Delete any dangling images from the docker.img file
    - Display the size of each docker container's log files (to see if a docker app is filling up the image file through excessive logging)
    Additional scripts from myself (and hopefully other users) can be found here:
    To add your own scripts: within the flash drive folder config/plugins/user.scripts/scripts, create a new folder (each script gets its own folder). The name doesn't matter, but it can only contain the following characters: letters ([A-Za-z]), digits ([0-9]), hyphens ("-"), underscores ("_"), colons (":"), periods ("."), and spaces (" "). Or, you can hit the button that says "Add Script" and give the script a name; hovering over the script's name will then give you additional options, including online editing... Create a file called description that contains the description of the script. Create a file called script; this will be the actual script.
    A few notes, to make things easier for people:
    - The script file can be created with any text editor you choose. DOS line endings will be automatically converted to Linux-style endings prior to execution.
    - #!/bin/bash is automatically added to every script prior to execution to help out the noobies. EDIT: this is only added if no interpreter is specified (ie: #!/bin/bash); if an interpreter is already specified (ie: #!/usr/bin/php), the line is not added.
    Techie notes:
    - The scripts are actually copied and executed from /tmp/user.scripts/tmpScripts, so if there are dependencies (other scripts, etc.) stored in the same folder as the script file, you will need to specify the full path to them.
    - Interactive (ie: answering yes/no to a question) scripts will not work.
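    For illustration, here's a minimal sketch of adding a script from the console; the folder name clear_ds_store and the script body are hypothetical examples (not shipped with the plugin), and /boot is where the flash drive is mounted:
    # create a folder for the script on the flash drive (letters, digits, -, _, :, . and spaces only)
    mkdir -p /boot/config/plugins/user.scripts/scripts/clear_ds_store
    # one-line description shown in the plugin's UI
    echo "Removes .DS_Store files from /mnt/user" > /boot/config/plugins/user.scripts/scripts/clear_ds_store/description
    # the script itself; the interpreter line is optional, it's added for you if missing
    cat > /boot/config/plugins/user.scripts/scripts/clear_ds_store/script << 'EOF'
    #!/bin/bash
    find /mnt/user -name '.DS_Store' -delete
    EOF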
    1 point
  3. Getting desperate now to narrow down the issue. I had my Unraid server set to a static IP and gateway in Unraid itself. I noticed on the UBNT side that when I tried to set a static address on the router/USG for this server, the Unifi Controller would give me an error. And it seems odd to me that my network drops appear to be the router losing the IP and route to my server on the XG, but not the network link connection itself. So last night I set Unraid to dynamic IP and gateway and it pulled a new IP address while still connected to the XG. I then went back to the Unifi side and this time it let me set the 'fixed' IP address for the Unraid server on the new IP. I rebooted the server and it cleanly connected to the network and was assigned the correct IP address by the USG. That connection has been up on the XG all night now. Will see how it goes today. I'm speculating that the XG is somehow losing the route to the Unraid server over time when the IP is set on the Unraid side, but if I let the network/router manage the address and gateway assignment, I'm wondering if this will change. Could be totally off the mark here, but just trying to narrow things down.
    1 point
  4. Have you seen this discussion? It appears to address your scenario with secure SSH across the Internet.
    1 point
  5. Sorry, can't point to a specific post, but Tom mentioned several times that multiple cache pools are on the roadmap, though I can't tell if it's going to happen on the next release or v8, but it should be soon™
    1 point
  6. Harmless display error caused by many plugins (not just mine). Mine have now all been updated to avoid this.
    1 point
  7. To clarify: in the case of a single "disk1" and either one or two parity devices, the md/unraid driver will write the same content to disk1 and to either or both parity devices, without invoking XOR to generate P and without using matrix arithmetic to calculate Q. Hence in terms of writes, single disk1 with one parity device functions identically to a 2-way mirror, and with a second parity device, as a 3-way mirror. The difference comes in reads. In a typical N-way mirror, the s/w keeps track of the HDD head position of each device and when a read comes along, chooses the device which might result in the least seek time. This particular optimization has not been coded into md/unraid - all reads will directly read disk1. Also, N-way mirrors might spread reads to all the devices, md/unraid doesn't have this optimization either. Note things are different if, instead of a single disk1, you have single disk2 (or disk3, etc), and also a parity2 device (Q). In this case parity2 will be calculated, so not a true N-way mirror in this case.
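    A short worked example may help; this assumes the textbook single/dual parity formulation (P is the XOR of the data disks, and disk slot i contributes g^(i-1)·D_i to Q over GF(2^8)), which is consistent with the description above:
    P = D_1 \oplus D_2 \oplus \cdots \oplus D_n
    Q = g^{0} D_1 \oplus g^{1} D_2 \oplus \cdots \oplus g^{n-1} D_n
    With only disk1 present (n = 1): P = D_1 and Q = g^{0} D_1 = D_1, so every write lands byte-for-byte on disk1, P and Q, i.e. a 2- or 3-way mirror. If the single data disk were disk2 instead, Q = g^{1} D_2 ≠ D_2, which is why parity2 has to be calculated in that case.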
    1 point
  8. shareUserInclude="disk1, disk2, disk3, disk4, disk5, disk6, disk7, disk8, disk9" On Settings -> Global Share Settings, uncheck all included disks (or check all), but best to uncheck or it will happen again if/when you add more disks.
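    After unchecking the included disks, the global include filter should end up empty, which behaves the same as having all disks included. As a quick sanity check, a minimal sketch assuming the setting is stored in /boot/config/share.cfg (the file location is an assumption; the supported way to change it is the Settings -> Global Share Settings page):
    # hypothetical check from the console; an empty value means no include filter
    grep shareUserInclude /boot/config/share.cfg
    shareUserInclude=""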
    1 point
  9. Seems like 6.6.2 fixed my issue with GUI mode not working. Thanks Limetech.
    1 point
  10. I could not see anything in the Release Notes, but does anyone know if 6.6.2 fixes the issue of Custom Parity Check Schedules being ignored? I have checks scheduled for the first Monday of the month every 3rd month, and since updating to 6.6.1, parity checks have commenced EVERY Monday.
    1 point
  11. I suggested a similar feature a while ago. Basically, due to how Unraid has grown in v6, we really need the capability to define & run multiple tier 2 pools (using BTRFS raid). T1 being the main Unraid pool, and several T2 pools like: Apps, Cache, VMs with optional mover support on each pool. Unassigned devices is a really nice plugin, but would be depended on a lot less with definable T2 pools. As a matter of course, I don't even understand why Unassigned devices isn't already integrated instead of a plugin given how integral it is to obtaining seriously increased functionality. LT took the first step in moving past being simply a storage OS with v6, now they need to really embrace it and add in the requisite storage options to make good use of the new features.
    1 point
  12. This right here describes my thoughts exactly. Please make this a feature!
    1 point
  13. Respectfully, that's what the min free space setting is for. As long as you use a share that is set to cache:prefer for stuff that you want to live on the cache drive and cache:yes for stuff that will be moved to the array, and have a proper setting for min free space, you will never run into this specific issue.
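    To make the two share types concrete, here's a hedged sketch; the share names are hypothetical and the shareUseCache key reflects how the share .cfg files on the flash typically look (normally you'd change this from each share's settings page rather than by editing files):
    # /boot/config/shares/appdata.cfg -- cache:prefer, files should live on the cache drive
    shareUseCache="prefer"
    # /boot/config/shares/downloads.cfg -- cache:yes, files land on the cache and are moved to the array later
    shareUseCache="yes"
    Paired with a min free space value larger than the biggest file you expect to write, new writes overflow to the array once the cache gets close to full instead of filling it.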
    1 point
  14. The SSD drives were purchased 8 months ago, so not that old. I'm talking about them because of what you said, and that is not true. Source: actual usage numbers presented previously for the crappiest SSD, which I happen to own and use (in addition to some nicer ones). You then try to change your assertion by adding an excluding argument via the Sandforce comment, but your "crappiest SSD" claim remains disproven. I've even provided the read time on the raid0 10k drives vs the cheap SSD. If you skipped that part, the summary was that it was better. Your argument based on cheap SSDs is invalid and incorrect. This isn't an opinion, it isn't what I "think." These are actual numbers from real usage scenarios and benchmarks I've done on the hardware.
    You also fail to address the I/O limitation based on actual use scenarios as outlined above. Probably because either it doesn't affect your usage scenario, or you realized cost quickly becomes a factor when trying to mitigate that bottleneck. But to your assertion, let's take it to the logical conclusion, which is using more and larger SSDs in the current pool configuration. For my needs and wants, and those of several others, I could go purchase four 2 TB quality SSDs for $2400 USD, put them in raid 10, and they'd give me 4 TB redundant with over 1GBPS read/write, and a decent amount of bandwidth, since it's my only option because I have no more PCIe slots to use. That sounds like a great thing to do, you know, spend more on 4 drives than my entire server cluster cost to assemble (including the rack and double APC battery backup system).
    Why 2 TB drives? I have a few VMs that run at the same time and I need a few persistent scratch disks to go with them. As also described, my file sizes are quite large and require more space on the cache drive than what would typically be available when also hosting VMs and docker downloads. So a couple of 120 GB drives won't cut it. From a cost factor, I could buy 8 WD Red 2 TB spinners for the price of 1 good 2 TB SSD. In raid 10 they would reach or exceed the single SSD's read/write speeds AND provide data redundancy not found on the single SSD, not to mention that I'd have 4x the storage over the single drive, and 2x the storage over the $2400 raid 10 SSD pool, while saving 3/4 of the total price. To use only cheap drives, with an average cost of 50-75 dollars, I would need a pool of at least thirty-three 120 GB drives to get near 2 TB with no redundancy. That's 1,650-2,475 USD. I currently only have space for 23 drives on my main server, so that's not doable. And also, that's ridiculous. I'm not Linus, and I don't get Storinators or boxes of SSDs for free. ¯\_(ツ)_/¯
    1 point
  15. Can I manually create and use multiple btrfs pools? Multiple cache pools are supported since v6.9, but for some use cases this can still be useful: you can also use multiple btrfs pools with the help of the Unassigned Devices plugin. There are some limitations, and most operations for creating and maintaining the pools will need to be done using the command line, so if you're not comfortable with that, wait for LT to add the feature. If you want to use them now, here's how:
    - If you don't have it yet, install the Unassigned Devices plugin.
    - It's better to start with clean/wiped devices, so wipe them or delete any existing partitions.
    - Using UD, format the 1st device using btrfs, choose the mount point name, and optionally activate auto mount and share.
    - Using UD, mount the 1st device; for this example it will be mounted at /mnt/disks/yourpoolpath
    - Using UD, format the 2nd device using btrfs; no need to change the mount point name, and leave auto mount and share disabled.
    - Now on the console/SSH, add the device to the pool by typing:
    btrfs dev add -f /dev/sdX1 /mnt/disks/yourpoolpath
    Replace X with the correct identifier; note the 1 at the end to specify the partition (for NVMe devices add p1, e.g. /dev/nvme0n1p1).
    - The device will be added and you will see the extra space on the 1st disk's free space graph; the whole pool will be accessible at the original mount point, in this example /mnt/disks/yourpoolpath
    - By default the disk is added in single profile mode, i.e., it will extend the existing volume. You can change that to other profiles, like raid0, raid1, etc. E.g., to change to raid1 type:
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/disks/yourpoolpath
    See here for the other available modes.
    - If you want to add more devices to that pool, just repeat the process above.
    Notes:
    - Only mount the first device with UD; all other members will mount together despite nothing being shown on UD's GUI. Same to unmount: just unmount the 1st device to unmount the pool.
    - It appears that if you mount the pool using the 1st device, used/free space is correctly reported by UD, unlike if you mount using e.g. the 2nd device. Still, for some configurations the space might be incorrectly reported; you can always check it using the command line:
    btrfs fi usage /mnt/disks/yourpoolpath
    - You can have as many unassigned pools as you want. Example of how it looks on UD: sdb+sdc+sdd+sde are part of a raid5 pool, sdf+sdg are part of a raid1 pool, sdh+sdi+sdn+sdo+sdp are another raid5 pool. Note that UD sorts the devices by identifier (sdX), so if sdp was part of the first pool it would still appear last; UD doesn't reorder the devices based on whether they are part of a specific pool. You can also see some of the limitations, i.e., no temperature is shown for the secondary pools' members, though you can see temps for all devices on the dashboard page. Still, it allows you to easily use multiple pools until LT adds multiple cache pools to Unraid.
    Remove a device:
    - To remove a device from a pool, type (assuming there's enough free space):
    btrfs dev del /dev/sdX1 /mnt/disks/yourpoolpath
    Replace X with the correct identifier; note the 1 at the end. Note that you can't go below the used profile's minimum number of devices, i.e., you can't remove a device from a 2-device raid1 pool; you can convert it to the single profile first and then remove the device. To convert to single, use:
    btrfs balance start -f -dconvert=single -mconvert=single /mnt/disks/yourpoolpath
    Then remove the device normally like above.
    Replace a device:
    - To replace a device from a pool (if you have enough ports to have both old and new devices connected simultaneously): you need to partition the new device; to do that, format it using the UD plugin (you can use any filesystem), then type:
    btrfs replace start -f /dev/sdX1 /dev/sdY1 /mnt/disks/yourpoolpath
    Replace X with the source, Y with the target; note the 1 at the end of both. You can check replacement progress with:
    btrfs replace status /mnt/disks/yourpoolpath
    If the new device is larger you need to resize it to use all available capacity; you can do that with:
    btrfs fi resize X:max /mnt/disks/yourpoolpath
    Replace X with the correct devid; you can find that with:
    btrfs fi show /mnt/disks/yourpoolpath
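    Putting the creation steps together, here's a minimal end-to-end sketch for a two-device raid1 pool; the device name /dev/sdY1 and the mount point are placeholders, and the 1st device is assumed to have already been formatted and mounted with UD at /mnt/disks/yourpoolpath as described above:
    # add the 2nd device (already formatted btrfs by UD) to the pool mounted at /mnt/disks/yourpoolpath
    btrfs dev add -f /dev/sdY1 /mnt/disks/yourpoolpath
    # convert data and metadata to the raid1 profile across both devices
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/disks/yourpoolpath
    # confirm the profile and per-device usage
    btrfs fi usage /mnt/disks/yourpoolpath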
    1 point