Leaderboard

Popular Content

Showing content with the highest reputation on 01/04/19 in Posts

  1. unRAID 6.4 introduces encrypted volumes, allowing individual disks to be protected by encryption. The passphrase or keyfile used to unlock the encryption is not permanently stored on the system itself, which means that on every system start you have to re-enter the passphrase or re-select the keyfile. Until then the array cannot start and the GUI reports a "missing key". But what if we want to auto-start the array without keeping a local copy of the key on the system itself? Storing a local copy rather defeats the protection scheme: if your system gets stolen, you don't want it to include the keys that allow access. The solution I came up with is to download the keyfile during startup from a different system on my local network, located in a different part of the house. In my case I use an arbitrary .png file, but you could also store a passphrase and have that copied over. In my go file I added the following lines to download the keyfile from a remote SMB share before emhttp starts:
     # auto unlock array
     mkdir -p /unlock
     mount -t cifs -o user=name,password=password,iocharset=utf8 //192.168.1.123/index /unlock
     cp -f /unlock/#/some.png /root/keyfile
     umount /unlock
     rm -r /unlock
     Of course the remote share must be accessible when the system starts, and this approach can surely be cracked, but it is definitely safer than just keeping a local copy of the keyfile on the unRAID server itself. Remember that once the array is started, the local keyfile may be deleted if you really don't want to keep the key while the system is operational. Perhaps my idea is useful for others wanting to do something similar. (A retry variant of the snippet is sketched after this item.)
    1 point
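    A note on the snippet in item 1: if the remote machine is still booting when unRAID starts, a single mount attempt can fail and leave the array locked. Below is a minimal sketch of a retry variant, assuming the same share and credentials as above; the retry count, sleep interval, and the "somedir" folder are placeholders (the original post redacts the real folder name).
      # auto unlock array, retrying in case the remote share is not up yet
      mkdir -p /unlock
      for attempt in 1 2 3 4 5; do
          if mount -t cifs -o user=name,password=password,iocharset=utf8 //192.168.1.123/index /unlock; then
              cp -f /unlock/somedir/some.png /root/keyfile   # "somedir" is a placeholder for the redacted folder
              umount /unlock
              break
          fi
          sleep 10   # give the remote machine time to finish booting
      done
      rm -r /unlock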
  2. I was wanting to do GPU hardware acceleration with a Plex Docker, but unRAID doesn't appear to have the drivers for the GPUs loaded. It would be nice to have the option to install the drivers so the Dockers could use them. (A common Intel iGPU workaround is sketched after this item.)
    1 point
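    For what it's worth, a workaround often posted on these forums for Intel iGPUs (not an official unRAID feature, and the device path can differ per system) is to load the i915 driver and pass /dev/dri through to the container:
      modprobe i915                    # load the Intel GPU kernel driver
      chmod -R 777 /dev/dri            # crude permission fix so the container user can open the device
      docker run -d --name=plex --device /dev/dri:/dev/dri plexinc/pms-docker
    NVIDIA cards are a different matter, since the proprietary driver itself is missing from the stock kernel, which is exactly what this feature request is about.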
  3. An extended SMART test will take many hours. I just meant the SMART reports, which you can easily get by clicking on a disk and looking at its attributes; you can page through all disks' attributes with the arrows. Or just look at the Dashboard. If none of the disks is showing a yellow triangle warning, then Unraid considers them to have good SMART. (The same data can be read from the console; see the sketch after this item.)
    1 point
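    As an aside, the attribute data the GUI shows can also be pulled from a console session with smartctl (replace /dev/sdb with the actual device):
      smartctl -H /dev/sdb    # overall health self-assessment (PASSED/FAILED)
      smartctl -A /dev/sdb    # full attribute table: reallocated sectors, pending sectors, etc.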
  4. You certainly can rebuild both at once, however, the safest option would be one at a time, with a correcting parity check showing zero errors after each replacement. Totally up to you how much risk you want to endure.
    1 point
  5. Perfectly correct, however some motherboards won't boot without a GPU, or need a BIOS option changed to boot without one. It's always best to have a GPU available to temporarily use to troubleshoot.
    1 point
  6. No. This is the needo / gfjardim support thread. But what are you going to do? Users will post where users post. If you really want to enforce things, then the SQL posts should never have been posted here at all, but rather on GitHub where Microsoft wants their support handled. Any question like "How do I have Plex transcode in RAM" should be posted on the Plex forums, as it's a general Plex-specific question rather than a support question for the template maintainer. Where do you draw the line? Personally, I don't think that this thread should be locked. If it's still pinned though, then that should definitely get changed. All deprecated, which means that previous users can install them, but new users can't.
    1 point
  7. The general place at the moment is the Docker Engine section. I agree it's not REALLY the right place, but it's a lot better than posting in a support thread from a guy who hasn't been here for 1.5 years. Link: https://forums.unraid.net/forum/58-docker-engine/ @jonp, what do you reckon? A new Docker-specific general section might break up the massive "general support" thread and make searching a bit easier.
    1 point
  8. Regarding: for the normal disks mounted within unraid itself you can enable/disable the monitoring of certain SMART values. Disks mounted via Unassigned Devices don't have this option and seem to follow the default settings as set in "Unraid: Settings -> Disk Settings". Would it be possible to implement the same SMART attribute monitoring menu as in unraid, where these can be toggled on/off?
    1 point
  9. I believe that at some point Limetech intends to integrate the functionality of the UD plugin into the base Unraid release. Hopefully at that time support for disk-specific settings for Unassigned Disks will become available as well?
    1 point
  10. CA user scripts can be used to do this.
    1 point
  11. They are two different views of User Shares. /mnt/user0 includes only those files that are on the array disks, while /mnt/user includes any files on the cache disks as well. All top-level folders on those disks are User Shares. You control which disks are allowed to hold files for a particular share using the settings for that share. If you want the folders to end up only on the cache, set "Use Cache: Prefer" or (if they are already on the cache) "Use Cache: Only". I suspect you have it set to "Use Cache: Yes", which will result in files being moved from cache to array. Turning on the help in the GUI gives more information on how the different Use Cache settings actually work. (A small console illustration of the two views follows this item.)
    1 point
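    A quick console illustration of the two views from item 11 ("MyShare" is a placeholder share name): a file that lives only on the cache shows up under /mnt/user but not under /mnt/user0.
      ls /mnt/user/MyShare     # merged view: files on array disks plus cache
      ls /mnt/user0/MyShare    # array-only view: cache-only files are absent here
      # lines flagged by diff are the files currently sitting on the cache
      diff <(ls /mnt/user0/MyShare) <(ls /mnt/user/MyShare)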
  12. A possible workaround is to set Settings -> Disk Settings to handle the notifications that you want UD to use. You can then override this for each Unraid disk where you want different settings. Not ideal, but probably better than nothing.
    1 point
  13. For each device you can enable/disable the specific SMART attributes which are monitored. On the Main page click on the device name and scroll to the SMART settings to make the necessary changes.
    1 point
  14. This array... is clean!
      [18421.678196] XFS (sdg1): Mounting V5 Filesystem
      [18421.702969] XFS (sdg1): Ending clean mount
      [18433.061212] mdcmd (236): set md_num_stripes 1280
      [18433.061224] mdcmd (237): set md_sync_window 384
      [18433.061232] mdcmd (238): set md_sync_thresh 192
      [18433.061239] mdcmd (239): set md_write_method
      [18433.061248] mdcmd (240): set spinup_group 0 0
      [18433.061257] mdcmd (241): set spinup_group 1 0
      [18433.061265] mdcmd (242): set spinup_group 2 0
      [18433.061274] mdcmd (243): set spinup_group 3 0
      [18433.061282] mdcmd (244): set spinup_group 4 0
      [18433.061290] mdcmd (245): set spinup_group 5 0
      [18433.061298] mdcmd (246): set spinup_group 6 0
      [18433.061306] mdcmd (247): set spinup_group 7 0
      [18433.061314] mdcmd (248): set spinup_group 8 0
      [18433.061322] mdcmd (249): set spinup_group 9 0
      [18433.061330] mdcmd (250): set spinup_group 10 0
      [18433.061338] mdcmd (251): set spinup_group 11 0
      [18433.061346] mdcmd (252): set spinup_group 12 0
      [18433.061355] mdcmd (253): set spinup_group 13 0
      [18433.061363] mdcmd (254): set spinup_group 14 0
      [18433.061371] mdcmd (255): set spinup_group 15 0
      [18433.061388] mdcmd (256): set spinup_group 29 0
      [18433.184487] mdcmd (257): start STOPPED
      [18433.184721] unraid: allocating 87420K for 1280 stripes (17 disks)
      [18433.206055] md1: running, size: 7814026532 blocks
      [18433.206317] md2: running, size: 3907018532 blocks
      [18433.206546] md3: running, size: 3907018532 blocks
      [18433.206787] md4: running, size: 3907018532 blocks
      [18433.207036] md5: running, size: 3907018532 blocks
      [18433.207294] md6: running, size: 3907018532 blocks
      [18433.207520] md7: running, size: 3907018532 blocks
      [18433.207743] md8: running, size: 3907018532 blocks
      [18433.207980] md9: running, size: 7814026532 blocks
      [18433.208195] md10: running, size: 11718885324 blocks
      [18433.208447] md11: running, size: 7814026532 blocks
      [18433.208663] md12: running, size: 2930266532 blocks
      [18433.208893] md13: running, size: 3907018532 blocks
      [18433.209121] md14: running, size: 3907018532 blocks
      [18433.209339] md15: running, size: 2930266532 blocks
      [18505.068952] XFS (md1): Mounting V5 Filesystem
      [18505.220978] XFS (md1): Ending clean mount
      [18505.241064] XFS (md2): Mounting V5 Filesystem
      [18505.420607] XFS (md2): Ending clean mount
      [18505.524083] XFS (md3): Mounting V5 Filesystem
      [18505.712850] XFS (md3): Ending clean mount
      [18505.807641] XFS (md4): Mounting V4 Filesystem
      [18505.990918] XFS (md4): Ending clean mount
      [18506.007166] XFS (md5): Mounting V5 Filesystem
      [18506.206230] XFS (md5): Ending clean mount
      [18506.276970] XFS (md6): Mounting V5 Filesystem
      [18506.462988] XFS (md6): Ending clean mount
      [18506.528073] XFS (md7): Mounting V4 Filesystem
      [18506.691736] XFS (md7): Ending clean mount
      [18506.735099] XFS (md8): Mounting V5 Filesystem
      [18507.017610] XFS (md8): Ending clean mount
      [18507.085893] XFS (md9): Mounting V5 Filesystem
      [18507.288553] XFS (md9): Ending clean mount
      [18507.393625] XFS (md10): Mounting V5 Filesystem
      [18507.577104] XFS (md10): Ending clean mount
      [18507.819136] XFS (md11): Mounting V5 Filesystem
      [18507.976554] XFS (md11): Ending clean mount
      [18508.106641] XFS (md12): Mounting V5 Filesystem
      [18508.341221] XFS (md12): Ending clean mount
      [18508.430243] XFS (md13): Mounting V5 Filesystem
      [18508.588536] XFS (md13): Ending clean mount
      [18508.660636] XFS (md14): Mounting V5 Filesystem
      [18508.805264] XFS (md14): Ending clean mount
      [18508.865881] XFS (md15): Mounting V5 Filesystem
      [18509.044894] XFS (md15): Ending clean mount
      [18509.134343] XFS (sdb1): Mounting V4 Filesystem
      [18509.288511] XFS (sdb1): Ending clean mount
      Quick final questions:
      1. How can I report this to someone at Unraid so they can look at upgrading the xfsprogs bundled with 6.7? (A quick console check is sketched after this item.)
      2. As there was a kernel panic and an unclean shutdown, it's wanting to run a parity sync... there's a checkbox saying "Write corrections to parity"; I assume that means take the disks as gospel and update the parity to match them?
      Parity fix running.
    1 point
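    Regarding question 1 in item 14: when filing a report it helps to state the currently bundled xfsprogs version, which can be read straight from the console (assuming xfs_repair is on the PATH, as it is on stock Unraid):
      xfs_repair -V    # prints the bundled xfsprogs version string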
  15. Probably a good idea. Can you post the link here as one last interruption to the thread? With apologies to @SpaceInvaderOne
    1 point
  16. Yea. I will need testers shortly. I feel like I should create a separate thread for this so it's not hijacking spaceinvader's.
    1 point
  17. I've put quite a few hours into creating a tidy UI for the application. Once this is complete we should be able to start adding much better control over the settings. The UI and code tidy-up should be complete with 5 or so more hours of coding (probably tomorrow if time permits). See the attached screenshots for an idea of what it will look like. You can see that I have implemented the "worker" approach: in the screenshot I have two workers that are passed files to re-encode. Once complete, they are logged in the "recent activities" list. Currently unsure about the server stats; that may not be complete by the time I push this to Docker Hub, but I think it will be a nice idea to see what sort of load you are putting on the server.
    1 point
  18. Wrong thread. This is for qBittorrent, not rTorrent. Here is that support thread: I made a PR for this: https://github.com/linuxserver/docker-qbittorrent/pull/38
    1 point
  19. Hi everyone. I'm new to unraid and somewhat new to Docker and containers. I have a 4-port Intel NIC in my unraid box that I would like to leverage, and I'm hoping to get some input/direction on what I want to do:
     1) I would like unraid to remain on the current NIC (on the motherboard).
     2) I'm going to install pfSense to act as my VM firewall on eth1 and eth2.
     3) I'm going to build a VM lab that will remain in a network behind eth2. In VirtualBox the networking of my guests is set to "internal network", so I would like to mimic that if I can.
     4) I would like to create Docker networks for eth3 and eth4: eth3 would cover the internal Docker containers, and eth4 would cover the external-facing Docker containers.
     There was something called pipework that could do this, but that information seems old, and either Docker or unraid may now support what I want natively; I'm just unsure how to do it. Hoping the community here can point me in the right direction. (A macvlan sketch for point 4 follows this item.)
    1 point
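    On point 4 of item 19: the modern replacement for pipework is Docker's built-in macvlan driver, which binds a network to a specific parent NIC. A minimal sketch follows; all names, subnets, and gateways below are placeholders for your own layout.
      # one network per physical NIC; containers attached to each get their own MAC on that segment
      docker network create -d macvlan --subnet=192.168.30.0/24 --gateway=192.168.30.1 -o parent=eth3 docker_internal
      docker network create -d macvlan --subnet=192.168.40.0/24 --gateway=192.168.40.1 -o parent=eth4 docker_external
      # attach a container to the internal-only network
      docker run -d --network=docker_internal --name=some_app some/image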
  20. That was speedy, thank you very much! Hopefully this helps with all the annoying illumination-change recordings.
    1 point
  21. I have pinned the Update Notes thread since it had long ago scrolled off the first page. Still seeing a number of people coming from some version older than 6.4 to the latest and greatest without bothering to read the release thread.
    1 point
  22. Why not run the openvpn server built in to pfSense?
    1 point