
Leaderboard


Popular Content

Showing content with the highest reputation since 12/18/18 in all areas

  1. 7 points
    Hey there Unraid community! Just wanted to take a moment between all the present opening, food eating, and carol singing to wish everyone here happy holidays! Thank you all for being so awesome!! All the best, Team LT (Tom, Jon, and Eric) Sent from my Pixel 3 XL using Tapatalk
  2. 4 points
    I want to do GPU hardware acceleration with a Plex Docker, but unRAID doesn't appear to have the drivers for the GPUs loaded. It would be nice to have the option to install the drivers so the dockers could use them.
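For anyone wondering whether a driver is already present on their box, a minimal check might look like this (a sketch assuming the standard /dev/nvidia0 device node; it doesn't install anything):

```shell
# Minimal sketch: see whether an NVIDIA kernel driver has created its device
# node on the host. If it hasn't, containers can't use the GPU either.
if [ -e /dev/nvidia0 ]; then
  gpu_status="driver loaded"
else
  gpu_status="no driver"
fi
echo "$gpu_status"
```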
  3. 4 points
    Support for Nginx Proxy Manager docker container Application Name: Nginx Proxy Manager Application Site: https://nginxproxymanager.jc21.com Docker Hub: https://hub.docker.com/r/jlesage/nginx-proxy-manager/ Github: https://github.com/jlesage/docker-nginx-proxy-manager Make sure to look at the complete documentation, available on GitHub! Post any questions or issues relating to this docker in this thread.
  4. 4 points
    I've put in quite a few hours creating a tidy UI for the application. Once this is complete, we should be able to start adding much better control over the settings. UI and code tidy-up should be complete with 5 or so more hours of coding (probably tomorrow if time permits). See the attached screenshots for an idea of what it will look like. You can see that I have implemented the "worker" approach. In the screenshot I have two workers that are passed files to re-encode. Once complete, they are logged in the "recent activities" list. Currently unsure about the server stats; that may not be complete by the time I push this to Docker Hub. But I think it will be a nice idea to see what sort of load you are putting on the server.
  5. 4 points
    Added to Unraid 6.7
  6. 3 points
    MAN I KNOW RIGHT! I always want to create new accounts on boards I've never posted on and ask questions that are crazily discussed at holiday parties! It's like the other day, for the second night of Hanukkah, me and Ishmael were having some friends over, and some guy was like "if I was going to do 10gbe in my house, I would go fiber, I don't care about the cable expense!" But this other guy was all "MAN, you gotta go copper. It's a little pricey now for the cards, but it's the future!" So after that, I went and created an account on the Synology forums and posted: Do you use 10gbe? I have tried several 10gbe hardware providers like Intel, Quanta and as of lately, Mellanox. Would rank Mellanox the highest because it has the best quality for a very low price, really impressed with the provider. What do you use? Totally legit. Welcome to the forums.
  7. 3 points
    Scheduled for Unraid OS 6.7 release.
  8. 2 points
    Notice: You must be running unRAID version 6.1 or later to use these plugins.

    The easiest way of installing plugins is through Community Applications. This apps installation manager is developed by Squid and needs to be installed separately. The alternative way of installing an optional plugin is from the Plugins page in the WebGui: use the tab Install Plugin, and copy and paste one of the URLs mentioned below into the install box.

    Available Dynamix plugins

    Active Streams shows in real-time any open SMB and AFP network streams. This allows an instant view of who is accessing the server - either by IP address or name - and what content is opened. Optionally, streams can be stopped from the GUI.

    Cache Dirs keeps folder information in memory to prevent unnecessary disk spin-up. Dynamix builds a GUI front-end to allow entering of parameters for the cache_dirs script, which runs in the background.

    S3 Sleep defines the conditions under which the system will go to S3 sleep mode. It also adds an unconditional 'sleep' button on the Array Operation page.

    System Info shows various details of your system hardware and BIOS. This includes the processor, memory and sub-system components.

    System Stats shows in real-time the disk utilizations and critical system resources, such as CPU usage, memory usage, interface bandwidth and disk I/O bandwidth.

    System Temp shows in real-time the temperature of the system CPU and motherboard. Temperatures can be displayed in Celsius or Fahrenheit. Your hardware must support the necessary probes, and additional software drivers may be required too. This plugin requires PERL, which needs to be installed separately.

    System AutoFan allows automatic fan control based on the system temperature. High and low thresholds are used to speed up or slow down the fan. This is a new plugin and still under development.

    Schedules is a front-end utility for the built-in hourly, daily, weekly and monthly schedules. It allows the user to alter the schedule execution times using the GUI. See Settings -> Scheduler -> Fixed Schedules.

    System Buttons adds a one-click button to the header which allows for instant sleep, reboot, shutdown of the system or array start/stop.

    Local Master supports detection of the local master browser in an SMB network. It will display an icon in the header at the top-right when unRAID is elected as local master browser. Under SMB Workgroup settings, more information about the currently elected local master browser is given.

    SSD TRIM allows the creation of a cron job to do regular SSD TRIM operations on the cache device(s). The command 'fstrim -v /mnt/cache' is executed at the given interval.

    File Integrity provides real-time hashing and verification of files stored on the data disks of the array. This plugin reports on failed file content integrity and detects silent file corruption (aka bit-rot). WARNING: USING THIS PLUGIN ON DISKS FORMATTED IN REISERFS MAY LEAD TO SYSTEM INSTABILITY. IT IS RECOMMENDED TO USE XFS.

    SCSI Devices (unRAID 6.2 or later) updates the udev persistent storage devices rules file (courtesy of bubbaQ), which allows proper naming of SCSI-attached disks. Please be aware that after installation of this plugin, it might be necessary to re-assign disks due to their changed names!

    Date Time (unRAID 6.2 or later) adds an interactive world map to the date and time settings. This allows the user to simply click on his/her country and select the corresponding time zone. In addition, the world map highlights the countries in the currently selected time zone.

    Stop Shell (unRAID 6.4 or later) adds a script which gets invoked when the array is stopped. This script looks for any open shells in /mnt/... and terminates them. This ensures the array can be stopped. Be aware that automatic termination of open shells may lead to data loss if an active process is writing to the array.

    Day Night (unRAID 6.5 or later) automatically toggles between a day theme and a night theme, based on the sunrise and sunset times of your location.

    Installation URLs (copy & paste)

    Active Streams - https://raw.github.com/bergware/dynamix/master/unRAIDv6/dynamix.active.streams.plg
    Cache Dirs - https://raw.github.com/bergware/dynamix/master/unRAIDv6/dynamix.cache.dirs.plg
    S3 Sleep - https://raw.github.com/bergware/dynamix/master/unRAIDv6/dynamix.s3.sleep.plg
    System Info - https://raw.github.com/bergware/dynamix/master/unRAIDv6/dynamix.system.info.plg
    System Stats - https://raw.github.com/bergware/dynamix/master/unRAIDv6/dynamix.system.stats.plg
    System Temp - https://raw.github.com/bergware/dynamix/master/unRAIDv6/dynamix.system.temp.plg
    System AutoFan - https://raw.github.com/bergware/dynamix/master/unRAIDv6/dynamix.system.autofan.plg
    Schedules - https://raw.github.com/bergware/dynamix/master/unRAIDv6/dynamix.schedules.plg
    System Buttons - https://raw.github.com/bergware/dynamix/master/unRAIDv6/dynamix.system.buttons.plg
    Local Master - https://raw.github.com/bergware/dynamix/master/unRAIDv6/dynamix.local.master.plg
    SSD TRIM - https://raw.github.com/bergware/dynamix/master/unRAIDv6/dynamix.ssd.trim.plg
    File Integrity - https://raw.github.com/bergware/dynamix/master/unRAIDv6/dynamix.file.integrity.plg
    SCSI Devices - https://raw.github.com/bergware/dynamix/master/unRAIDv6/dynamix.scsi.devices.plg
    Date Time - https://raw.github.com/bergware/dynamix/master/unRAIDv6/dynamix.date.time.plg
    Stop Shell - https://raw.github.com/bergware/dynamix/master/unRAIDv6/dynamix.stop.shell.plg
    Day Night - https://raw.github.com/bergware/dynamix/master/unRAIDv6/dynamix.day.night.plg

    gridrunner aka Spaceinvader One has made a very nice video tutorial about several Dynamix plugins. A recommended watch if you would like to learn more. You like my Dynamix plugins?
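As an aside, the SSD TRIM plugin's job boils down to a single cron line around the 'fstrim -v /mnt/cache' command mentioned above. A sketch (the weekly schedule and the scratch file path are my own choices; the plugin manages its own cron entry):

```shell
# Write an example cron line (Sundays at 03:00) to a scratch file so nothing
# on a real system is touched; the plugin generates an equivalent entry.
echo '0 3 * * 0 /sbin/fstrim -v /mnt/cache' > /tmp/trim-cron-example
cat /tmp/trim-cron-example
```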
  9. 2 points
    This array... is clean! [18421.678196] XFS (sdg1): Mounting V5 Filesystem [18421.702969] XFS (sdg1): Ending clean mount [18433.061212] mdcmd (236): set md_num_stripes 1280 [18433.061224] mdcmd (237): set md_sync_window 384 [18433.061232] mdcmd (238): set md_sync_thresh 192 [18433.061239] mdcmd (239): set md_write_method [18433.061248] mdcmd (240): set spinup_group 0 0 [18433.061257] mdcmd (241): set spinup_group 1 0 [18433.061265] mdcmd (242): set spinup_group 2 0 [18433.061274] mdcmd (243): set spinup_group 3 0 [18433.061282] mdcmd (244): set spinup_group 4 0 [18433.061290] mdcmd (245): set spinup_group 5 0 [18433.061298] mdcmd (246): set spinup_group 6 0 [18433.061306] mdcmd (247): set spinup_group 7 0 [18433.061314] mdcmd (248): set spinup_group 8 0 [18433.061322] mdcmd (249): set spinup_group 9 0 [18433.061330] mdcmd (250): set spinup_group 10 0 [18433.061338] mdcmd (251): set spinup_group 11 0 [18433.061346] mdcmd (252): set spinup_group 12 0 [18433.061355] mdcmd (253): set spinup_group 13 0 [18433.061363] mdcmd (254): set spinup_group 14 0 [18433.061371] mdcmd (255): set spinup_group 15 0 [18433.061388] mdcmd (256): set spinup_group 29 0 [18433.184487] mdcmd (257): start STOPPED [18433.184721] unraid: allocating 87420K for 1280 stripes (17 disks) [18433.206055] md1: running, size: 7814026532 blocks [18433.206317] md2: running, size: 3907018532 blocks [18433.206546] md3: running, size: 3907018532 blocks [18433.206787] md4: running, size: 3907018532 blocks [18433.207036] md5: running, size: 3907018532 blocks [18433.207294] md6: running, size: 3907018532 blocks [18433.207520] md7: running, size: 3907018532 blocks [18433.207743] md8: running, size: 3907018532 blocks [18433.207980] md9: running, size: 7814026532 blocks [18433.208195] md10: running, size: 11718885324 blocks [18433.208447] md11: running, size: 7814026532 blocks [18433.208663] md12: running, size: 2930266532 blocks [18433.208893] md13: running, size: 3907018532 blocks [18433.209121] md14: 
running, size: 3907018532 blocks [18433.209339] md15: running, size: 2930266532 blocks [18505.068952] XFS (md1): Mounting V5 Filesystem [18505.220978] XFS (md1): Ending clean mount [18505.241064] XFS (md2): Mounting V5 Filesystem [18505.420607] XFS (md2): Ending clean mount [18505.524083] XFS (md3): Mounting V5 Filesystem [18505.712850] XFS (md3): Ending clean mount [18505.807641] XFS (md4): Mounting V4 Filesystem [18505.990918] XFS (md4): Ending clean mount [18506.007166] XFS (md5): Mounting V5 Filesystem [18506.206230] XFS (md5): Ending clean mount [18506.276970] XFS (md6): Mounting V5 Filesystem [18506.462988] XFS (md6): Ending clean mount [18506.528073] XFS (md7): Mounting V4 Filesystem [18506.691736] XFS (md7): Ending clean mount [18506.735099] XFS (md8): Mounting V5 Filesystem [18507.017610] XFS (md8): Ending clean mount [18507.085893] XFS (md9): Mounting V5 Filesystem [18507.288553] XFS (md9): Ending clean mount [18507.393625] XFS (md10): Mounting V5 Filesystem [18507.577104] XFS (md10): Ending clean mount [18507.819136] XFS (md11): Mounting V5 Filesystem [18507.976554] XFS (md11): Ending clean mount [18508.106641] XFS (md12): Mounting V5 Filesystem [18508.341221] XFS (md12): Ending clean mount [18508.430243] XFS (md13): Mounting V5 Filesystem [18508.588536] XFS (md13): Ending clean mount [18508.660636] XFS (md14): Mounting V5 Filesystem [18508.805264] XFS (md14): Ending clean mount [18508.865881] XFS (md15): Mounting V5 Filesystem [18509.044894] XFS (md15): Ending clean mount [18509.134343] XFS (sdb1): Mounting V4 Filesystem [18509.288511] XFS (sdb1): Ending clean mount Quick final questions: 1. How can I report this to someone at Unraid so they can look at upgrading the xfsprogs bundled with 6.7? 2. As there was a kernel panic and an unclean shutdown, it's wanting to run a parity sync... there's a checkbox saying "Write corrections to parity"; I assume that means take the disks as gospel and update the parity to match them? Parity fix running
  10. 2 points
    Yea. I will need testers shortly. I feel like I should create a separate thread for this so it's not hijacking spaceinvader's. Sent from my ONE E1003 using Tapatalk
  11. 2 points
    VPN providers know you are using a VPN and can manipulate your data, log your DNS queries and retain them as long as they wish. On a VPS, by contrast, you can utilize it as a multifunctional server, and one of the services installed can be a VPN, with the option of hardening the security. In addition, with any reputable VPN provider you almost never get the full speed your ISP connection is capable of, even with encryption set to a minimum, as the higher the encryption, the more CPU usage there will be. On my server I utilize DNSCrypt in conjunction with Unbound, and with Unbound you have multiple options for hardening your DNS. That VPN link is connected to my pfSense, and there I have all my DNS queries sent over HTTPS (CloudFlare, QuadDNS, OpenDNS). My encryption algo is set to AES-256; there are more options on the pfSense box that would be lengthy to mention. The VPS I use has an (Allow IRC Servers, VPN, Torrents, Free DMCA) policy.
  12. 2 points
    The last time I checked, most of the SCSI changes were implemented in 4.19. I haven't done a full 4.19 vs 4.20 breakdown in the SCSI and FS areas/modules to see what additional changes were implemented. If 6.7 drops with 4.19, we "should" be good. If 6.7 comes with a Slackware re-baseline, even better, as there are several updated packages that would complement the improvements. The other aspect I have been becoming familiar with is UNMAP. Similar to FSTRIM, it provides instructions to the storage device to perform certain actions. Again, learning as time permits. Nevertheless, it seems the SCSI community acknowledged that the collapse of several modules and programming language/library optimization has affected several functionalities in the HBA world. I'm really hoping it is all put to bed at 4.19 or 4.20. Again, fingers crossed.
  13. 2 points
    It is included in the upcoming Unraid version 6.7
  14. 2 points
    - better storage setup
    - ability to add more nifty things via docker
    - ability to run more VMs, including a firewall
    - your coolness factor increases by a factor of 5
    - virtual 10gbe connection to the server from a Windows VM vs. buying 10gbe hardware
    - ability to divide resources and not waste them: if someone watches Plex while you're gaming on a standalone computer, your game can suffer, whereas on unraid you can isolate them from each other
    - did i mention your coolness factor increases by a factor of 5?
  15. 2 points
    The only real change in this release is that there is now a backup application feed server running (thank you @limetech @eschultz @jonp). In the event that the primary server cannot be reached, the feed will download instead from the secondary server, and failing that, a USB-stored feed (if present - ask for details) will be utilized instead. Oh yeah, happy ho-ho.
  16. 2 points
    Although I have a lot of sympathy for this view, in my mind the biggest objection I have is that it is not easily 'discoverable'. I was thinking that it should appear on the Shares tab, but under the Disk Shares section (regardless of whether disk shares are enabled or not), not the User Shares section.
  17. 2 points
    Well I don't know what changed, but I tried it again today and it worked fine! Thanks for this patch!
  18. 2 points
    Looks like you have quite the variety of drives, so that complicates things. Here's how I would proceed.

    1) Build a new array; ultimately your goal will be a solid, reliable array. Don't reuse any of your old drives, since we are going to try and extract data from them. Note that with this method of recovery, I don't think you can rely on any drive giving you back 100%, so if you have to do a rebuild on any given drive (assuming you fully recover that many drives), I don't know how reliable your rebuilt drive would be, either. You're welcome to try it. If not, maybe start with 1 parity and 1 data drive and work your way up from there. I'm assuming you know which of the old drives were data and which were parity. This method recovers the data treating the old array drives as JBOD.

    2) Take, say, the STBV5000100 and buy another exact model drive. Last time, I bought a used working drive off eBay. Test the newer drive and make sure it works and is reliable. Replace the bad drive's board with the new drive's board. Plug the bad drive into the server and use something like Unassigned Devices to mount it, then see how much data you can copy off of it. Once you have extracted as much data as you can, unmount and remove your bad drive. Swap the controller boards back. The bad drive goes on the shelf in case you need it for further recovery. The newer drive can be pre-cleared and added to the array. Repeat this step for all drives.

    Something I heard was that reallocated sectors are recorded somewhere on the controller board. I heard it quite a long time ago, so I don't know if it is/was true. If it is, your recovery may involve accessing some incorrect sectors, which is why I think the data isn't guaranteed to be 100%, but again, anything is better than 0%. This should also be non-destructive, so you could still use other methods to recover your data if you like. I have not heard of the diode fix, nor have I ever attempted to alter a controller board in any way. All I have done is a straight board swap, and hope that any data losses are livable. Thankfully, this isn't something that I have had to do regularly, but it has worked once or twice.

    PS: Dunno about the warranty, but I'd skip the soldering iron if you intend to go this route.
  19. 2 points
    Ah-ha! I may have found the problem: If I change the Primary vDisk Location from Auto to Manual, I can then Update the VM. But when I Edit it again, the Primary vDisk Location reverts to Auto.
  20. 2 points
    I am sorry I am going to have to skip this update. I just can't... Any ETA on 6.6.7?
  21. 2 points
  22. 1 point
    I would use a 9207-8i instead, since it's PCIe 3.0, and if you use all bays with fast disks you can use the extra bandwidth; this would get you around 185MB/s per disk with all slots occupied. As for cabling, one cable from the HBA to each backplane; no need for cascading.
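For the curious, a per-disk figure like that can be sanity-checked with back-of-the-envelope math. The 24-bay chassis, two 4-lane SAS2 links and ~10% protocol overhead below are my assumptions, not from the post, but the result lands in the same ballpark as the quoted 185MB/s:

```shell
# Two SFF-8087 cables x 4 lanes; 6Gb/s SAS2 is roughly 600MB/s per lane
# after 8b/10b encoding. Knock off ~10% for overhead, divide by 24 bays.
lanes=8
per_lane_mb=600
usable=$(( lanes * per_lane_mb * 9 / 10 ))
per_disk=$(( usable / 24 ))
echo "${per_disk}MB/s per disk"
```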
  23. 1 point
    You can make a backup of its contents by going to Main - Boot Device - Flash - Flash Device Settings and clicking on the Flash backup button. Then, see here: https://lime-technology.com/replace-key/
  24. 1 point
    WiFi is quite often fast enough, which is why lots of media players can be run over WiFi. And it allows a backup server to be placed in a room that doesn't have wired networking. The important thing is that different users have different usage cases and different needs.
  25. 1 point
    Solved: File Upload Size Limitation

    I had been fiddling with LSIO's letsencrypt container to make it work as a reverse proxy for LSIO's Nextcloud. The reverse proxy works, but file uploads are limited to 10MB. The solution is to edit the file proxy.conf, which for me resides in /mnt/cache/appdata/letsencrypt_lsio/nginx. The first line in that file is:

    client_max_body_size 10m;

    Change it to:

    client_max_body_size 0;

    This turns off the size check, and everything works.
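If you'd rather script the edit than open an editor, something like this works. It's demonstrated on a scratch copy so nothing real is touched; point the path at your own proxy.conf at your own risk:

```shell
# Make a scratch file standing in for proxy.conf, then swap the directive
# in place with sed (GNU sed's -i flag edits the file directly).
conf=$(mktemp)
echo 'client_max_body_size 10m;' > "$conf"
sed -i 's/^client_max_body_size .*/client_max_body_size 0;/' "$conf"
cat "$conf"
```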