Leaderboard


Popular Content

Showing content with the highest reputation since 06/20/19 in Posts

  1. 13 points
    To upgrade: If you are running any 6.4 or later release, click 'Check for Updates' on the Tools/Update OS page. If you are running a pre-6.4 release, click 'Check for Updates' on the Plugins page. If the above doesn't work, navigate to Plugins/Install Plugin, select/copy/paste this plugin URL and click Install: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg Refer also to the excellent 6.4 Update Notes from @ljm42, which are helpful especially if you are upgrading from a pre-6.4 release.
    Bugs: If you discover a bug or other issue in this release, please open a Stable Releases Bug Report.
    This is a bug fix and security update release. Due to another set of processor vulnerabilities called ZombieLoad, and a set of TCP denial-of-service vulnerabilities called SACK Panic, all users are encouraged to update. We are also still trying to track down the source of SQLite database corruption; it will be very helpful if those affected by this issue also upgrade to this release.
    Version 6.7.1 2019-06-22
    Base distro:
    btrfs-progs: version 5.1.1
    curl: version 7.65.1 (CVE-2019-5435, CVE-2019-5436)
    dhcpcd: version 7.2.2
    docker: version 18.09.6
    kernel-firmware: version 20190607_1884732
    mozilla-firefox: version 66.0.5
    openssl: version 1.1.1c
    openssl-solibs: version 1.1.1c
    php: version 7.2.19 (removed sqlite support)
    samba: version 4.9.8 (CVE-2018-16860)
    xfsprogs: version 5.0.0
    Linux kernel:
    version: 4.19.55 (CVE-2018-12126, CVE-2018-12127, CVE-2018-12130, CVE-2019-11091, CVE-2019-11833, CVE-2019-11477, CVE-2019-11478, CVE-2019-11479)
    intel-microcode: version 20190618
    Management:
    shfs: support FUSE use_ino option
    Dashboard: added draggable fields in table
    Dashboard: added custom case image selection
    Dashboard: enhanced sorting
    Docker + VM: enhanced sorting
    Docker: disable button "Update All" instead of hiding it when no updates are available
    Fix OS update banner overhanging in Azure / Gray themes
    Do not allow plugin updates to same version
    misc style corrections
  2. 7 points
  3. 7 points
    Can you promote SpaceInvaderOne? He's the only reason I use Unraid.
  4. 5 points
    To upgrade: If you are running any 6.4 or later release, click 'Check for Updates' on the Tools/Update OS page. If you are running a pre-6.4 release, click 'Check for Updates' on the Plugins page. If the above doesn't work, navigate to Plugins/Install Plugin, select/copy/paste this plugin URL and click Install: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg Refer also to the excellent 6.4 Update Notes from @ljm42, which are helpful especially if you are upgrading from a pre-6.4 release.
    Bugs: If you discover a bug or other issue in this release, please open a Stable Releases Bug Report.
    We restored PHP/SQLite support and updated the kernel-firmware package. We are also still trying to track down the source of SQLite database corruption; it will be very helpful if those affected by this issue also upgrade to this release. If you have issues with SQLite DB corruption, please post in this separate Bug Report.
    Version 6.7.2 2019-06-25
    Base distro:
    kernel-firmware: version 20190620_7ae3a09
    php: version 7.2.19 (restore sqlite support)
    sqlite: version 3.28.0
    Linux kernel:
    version: 4.19.56
  5. 5 points
    I have made an updated video guide for setting up this great container. It covers setting up the container, port forwarding, and setting up clients on Windows, macOS, Linux (Ubuntu MATE) and on mobile - Android and iOS. Hope this guide helps people new to setting up OpenVPN.
  6. 4 points
    Currently unRAID uses basic auth to enter credentials for the web GUI, but many password managers don't support this. It would be great if we could get a proper login page. Examples: this kind of login page always works with password managers; this one does not.
  7. 4 points
    It's got a big LinuxServer logo at the top of the thread and lots of red warnings, and the first two posts go into some detail about how it works. The plugin is installed from Community Applications, with an author of LinuxServer.io. How much clearer can we make it? Sent from my Mi A1 using Tapatalk
  8. 4 points
    Create a separate topic with a proper report of your problem and attach diagnostics. Posting a 3-line complaint on every release topic will not get you the necessary help.
  9. 4 points
    Sounds like you are attending a meeting. Stay strong.
  10. 4 points
    Today's updates to the mover tuning plugin allow you to do that (CPU and I/O priority).
  11. 4 points
  12. 4 points
    Damn, I do follow that repo and also the slackware64-current package list. What happened with this release was that it was actually built and tested a couple of days ago when 4.19.53 was released. Then I noticed the kernel was up to .55 very quickly, and in looking at the change log there is a single change (probably related to 'SACK panic' changes) - and any time Greg K-H creates another patch release with a single change, it's probably important 😳 So I did a quick kernel update and rebuilt this 6.7.1 release & didn't notice that firmware update from yesterday - yeah, this stuff is changing rapidly these days! How important is the f/w update to you? I can generate a 6.7.2 pretty quickly if so...
  13. 4 points
    Not sure if I'm disappointed or not. I had it in my head (based upon the avatar) that Tom wore Hawaiian shirts all the time.
  14. 3 points
    Attached is a debugged version of this script, modified by me. I've eliminated almost all of the extraneous echo calls. Many of the block outputs have been replaced with heredocs. All of the references to md_write_limit have either been commented out or removed outright. All legacy command substitution has been replaced with modern command substitution. The script locates mdcmd on its own. https://paste.ee/p/wcwWV
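    For anyone curious what those changes look like in practice, here's a minimal illustrative sketch (not the attached script itself); the /usr/local/sbin/mdcmd fallback path is an assumption:

#!/bin/bash
# Modern command substitution: $(...) instead of legacy backticks
mdcmd=$(command -v mdcmd || echo /usr/local/sbin/mdcmd)

# A heredoc instead of a run of individual echo calls
cat <<EOF
Using mdcmd at: $mdcmd
(md_write_limit references have been dropped in this version.)
EOF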
  15. 3 points
    Granted this has been covered in a few other posts, but I just wanted to have it with a little bit of layout and structure. Special thanks to [mention=9167]Hoopster[/mention] whose post(s) I took this from.

    What is Plex Hardware Acceleration?
    When streaming media from Plex, a few things are happening. Plex will check against the device trying to play the media:
    Media is stored in a compatible file container
    Media is encoded in a compatible bitrate
    Media is encoded with compatible codecs
    Media is a compatible resolution
    Bandwidth is sufficient
    If all of the above is met, Plex will Direct Play, or send the media directly to the client without being changed. This is great in most cases as there will be very little, if any, overhead on your CPU. This should be okay in most cases, but you may be accessing Plex remotely or on a device that is having difficulty with the source media. You could either manually convert each file or get Plex to transcode the file on the fly into another format to be played.

    A simple example: Your source file is stored in 1080p. You're away from home and you have a crappy internet connection. Playing the file in 1080p is taking up too much bandwidth, so to get a better experience you can watch your media in glorious 240p without stuttering / buffering on your little mobile device by getting Plex to transcode the file first. This is because a 240p file will require considerably less bandwidth compared to a 1080p file. The issue is that depending on which format you're transcoding from and to, this can absolutely pin all your CPU cores at 100%, which means you're gonna have a bad time. Fortunately Intel CPUs have a little thing called Quick Sync, which is their native hardware encoding and decoding core. This can dramatically reduce the CPU overhead required for transcoding, and Plex can leverage this using their Hardware Acceleration feature.

    How Do I Know If I'm Transcoding?
    You're able to see how media is being served by playing something on a device. Log into Plex and go to Settings > Status > Now Playing. As you can see, this file is being direct played, so there's no transcoding happening. If you see (throttled) it's a good sign. It just means that your Plex Media Server is able to perform the transcode faster than is necessary. To initiate some transcoding, go to where your media is playing. Click on Settings > Quality > Show All > Choose a Quality that isn't the Default one. If you head back to the Now Playing section in Plex you will see that the stream is now being Transcoded. I have Quick Sync enabled, hence the "(hw)" which stands for, you guessed it, Hardware. "(hw)" will not be shown if Quick Sync isn't being used in transcoding.

    Prerequisites
    1. A Plex Pass - required if you want Plex Hardware Acceleration. Test to see if your system is capable before buying a Plex Pass.
    2. Intel CPU that has Quick Sync capability - search for your CPU using Intel ARK.
    3. Compatible motherboard - you will need to enable the iGPU in your motherboard BIOS. In some cases this may require you to have the HDMI output plugged in and connected to a monitor in order for it to be active. If you find that this is the case on your setup, you can buy a dummy HDMI doo-dad that tricks your unRAID box into thinking that something is plugged in. Some machines like the HP MicroServer Gen8 have iLO / IPMI which allows the server to be monitored / managed remotely. Unfortunately this means that the server has 2 GPUs, and ALL GPU output from the server passes through the ancient Matrox GPU.
So as far as any OS is concerned, even though the Intel CPU supports Quick Sync, the Matrox one doesn't. =/ You'd have better luck using the new unRAID Nvidia plugin.

Check Your Setup
If your config meets all of the above requirements, give these commands a shot; you should know straight away if you can use Hardware Acceleration. Log in to your unRAID box using the GUI and open a terminal window, or SSH into your box if that's your thing. Type:
ls /dev/dri
If you see an output like the one above, your unRAID box has Quick Sync enabled. The two items we're interested in specifically are card0 and renderD128. If you can't see them, not to worry, type this:
modprobe i915
There should be no return or errors in the output. Now again run:
ls /dev/dri
You should see the expected items, i.e. card0 and renderD128.

Give your Container Access
Lastly we need to give our container access to the Quick Sync device. I am going to passive-aggressively mention that they are indeed called containers and not dockers. Dockers are a manufacturer of boots and pants and have nothing to do with virtualization or software development, yet. Okay, rant over. We need to do this because the Docker host and its underlying containers don't have access to anything on unRAID unless you give it to them. This is done via Paths, Ports, Variables, Labels or, in this case, Devices. We want to provide our Plex container with access to one of the devices on our unRAID box. We need to change the relevant permissions on our Quick Sync device, which we do by typing into the terminal window:
chmod -R 777 /dev/dri
Once that's done, head over to the Docker tab and click on your Plex container. Scroll to the bottom, click on Add another Path, Port, Variable. Select Device from the drop-down and enter the following:
Name: /dev/dri
Value: /dev/dri
Click Save followed by Apply. Log back into Plex and navigate to Settings > Transcoder. Click on the button to SHOW ADVANCED and enable "Use hardware acceleration where available". You can now do the same test we did above by playing a stream, changing its Quality to something that isn't its original format and checking the Now Playing section to see if Hardware Acceleration is enabled. If you see "(hw)" congrats! You're using Quick Sync and Hardware Acceleration [emoji4]

Persist your config
On reboot unRAID will not run those commands again unless we put them in our go file. So when ready, type into the terminal:
nano /boot/config/go
Add the following lines to the bottom of the go file:
modprobe i915
chmod -R 777 /dev/dri
Press Ctrl+X, followed by Y, to save your go file. And you should be golden!
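For reference, here is the whole terminal side of the above consolidated into one sketch (the device names card0 and renderD128 are the usual ones on an Intel iGPU system, but check your own output; appending to the go file with echo is just an alternative to editing it in nano as described above):

# Load the Intel iGPU driver and confirm the render devices exist
modprobe i915
ls /dev/dri            # expect to see card0 and renderD128

# Open up permissions so the Plex container can use the device
chmod -R 777 /dev/dri

# Persist across reboots by appending the same commands to the go file
echo "modprobe i915" >> /boot/config/go
echo "chmod -R 777 /dev/dri" >> /boot/config/go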
  16. 3 points
    Looks like the card was touching the metal surrounding the slot.
  17. 3 points
  18. 3 points
    Do you want to stay informed about all things Unraid? Sign up for our monthly newsletter and you'll receive a concise digest of new blog posts, popular forum posts, product/company announcements, videos and more! https://unraid.net/blog/unraid-monthly-digest Cheers
  19. 3 points
    To the developers, to everyone in this community involved with getting unRAID where it stands today! I guess unRAID is written differently these days. I remember starting with unRAID Server Plus almost 10 years ago, the days Tom Mortensen thanked you in person for buying a license - no criticism here, of course. For some reason, still very obscure to my mind (getting old), I installed one of the first versions of Windows Home Server. After that got killed by MS I went with an Apple server - great idea (NOT) - and then switched to Windows 10 with Stablebit Drivepool for a couple of years. Somehow I never looked back, although I never really liked it a lot; way too clean and lean for an ex-Unix admin... Until recently. I rediscovered unRAID and I must say that I was baffled by the software, the plugins, the dockers, the whole setup and especially by the supporting community that arose around this beauty. Moved 15TB of media through the Unassigned Devices plugin to a newly built array over the weekend; parity build is now ongoing. Happy! Thanks! Mike
  20. 3 points
    Hey everyone, just thought I'd put this up here after reading a syslog by another forum member and realizing a repeating pattern I've seen here where folks decide to let Plex create temporary files for transcoding on an array or cache device instead of in RAM.

    Why should I move transcoding into RAM? What do I gain?
    In short, transcoding is both CPU and IO intensive. Many write operations occur to the storage medium used for transcoding, and when using an SSD specifically, this can cause unnecessary wear and tear that would lead to SSD burnouts happening more quickly than is necessary. By moving transcoding to RAM, you alleviate the burden from your non-volatile storage devices. RAM isn't subject to "burn out" from usage like an SSD would be, and transcoding doesn't need nearly as much space in memory to perform as some would think.

    How much RAM do I need for this?
    A single stream of video content transcoded to 12mbps on my test system took up 430MB on the root RAM filesystem. The quality of the source content shouldn't matter, only the bitrate to which you are transcoding. In addition, there are other transcoding settings you can tweak that would impact this number, including how many seconds of transcoding should occur in advance of being played. Bottom line: if you have 4GB or less of total RAM on your system, you may have to tweak settings based on how many different streams you intend on transcoding simultaneously. If you have 8GB or more, you are probably in the safe zone, but obviously the more RAM you use in general, the less space will be available for transcoding.

    How do I do this?
    There are two tweaks to be made in order to move your transcoding into RAM. One is to the Docker container you are running and the other is a setting from within the Plex web client itself.

    Step 1: Changing your Plex Container Properties
    From within the webGui, click on "Docker" and click on the name of the PlexMediaServer container. From here, add a new volume mapping: /transcode to /tmp. Click "Apply" and the container will be started with the new mapping.

    Step 2: Changing the Plex Media Server to use the new transcode directory
    Connect to the Plex web interface from a browser (e.g. http://tower:32400/web). From there, click the wrench in the top right corner of the interface to get to settings, then click the "Server" tab at the top of this page. On the left, you should see a setting called "Transcoder." Clicking on that and then clicking the "Show Advanced" button will reveal the magical setting that lets you redirect the transcoding directory. Type "/transcode" in there, click Apply and you're all set. You can tweak some of the other settings if desired to see if that improves your media streaming experience. Thanks for reading and enjoy!
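    If you want to sanity-check that transcoding really is landing in RAM once this is set up, something like the following from the unRAID terminal works (the container name PlexMediaServer is an assumption - substitute whatever yours is called):

# /tmp on unRAID lives in RAM, so this shows how much memory
# transcode files are currently occupying on the host
df -h /tmp

# Size of the transcode session files as seen from inside the container
docker exec PlexMediaServer du -sh /transcode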
  21. 3 points
    Finally got around to updating to 6.7.2 from 6.7.0 (I skipped 6.7.1). So far, I have seen zero issues. System boots fine, WebGUI is normal, all plugins and dockers are functioning as expected, all disks present, etc. I have to say that has almost always been my experience since I started with unRAID back with version 5.0 beta14 (as prereleases used to be called). I have had the same flash drive through almost all those releases and the server has been through several motherboard, CPU and RAM changes as well. The only problems I ever had related to a specific release of unRAID were eventually resolved with BIOS and Linux kernel updates, and that has only happened once in each case; both times with interim workarounds. I know those who have problems, especially when those problems occur right after an unRAID OS update, may be inclined to believe something was "broken" in unRAID and can't understand how that got through testing. Keep in mind that most issues end up being related to specific hardware, USB drives, plugins, configurations, kernel revisions, BIOS revisions, id10t errors, etc. There is no way Limetech can test every possible combination. In fact, the issue might have pre-dated the unRAID OS update and was only revealed because the update required a server reboot, which touches all of the above mentioned pieces of the puzzle. The swiftness with which Limetech, community developers and the community in general respond to even the most obscure edge cases is truly remarkable when compared to other OS/App vendors. With the rapid pace of kernel releases, CVEs, security mitigations and everything else lately, I am giving Limetech a huge virtual pat on the back for keeping up with it all as quickly as they do.
  22. 3 points
    It's sad that people have now turned this awesome hard work into a post worthy of: https://old.reddit.com/r/ChoosingBeggars/ Please remember that before this came out, we had to buy expensive large-core-count processors AND use more power to do what we are doing today. Thank you again for all your hard work. Without it we are back to what we had before, which is nothing. *I hope your family member is doing well. No amount of software/computer work is worth not having those memories of said family members.*
  23. 3 points
    No, let's be honest here, the end user needs to take some responsibility with this one. LimeTech publishes an update; if the user knows they're using an Nvidia or DVB build, then they've got to wait until it's done before they upgrade. As soon as I build them I post in the support thread, so set up an email alert. Why do we need to do more and more work to essentially spoon-feed? Come on guys, really? Sent from my Mi A1 using Tapatalk
  24. 3 points
    I do not think we should add anything messing with the update check for unraid. That is just asking for trouble.
  25. 3 points
    You guys please settle down. The fact that sqlite support was in php in the first place was an oversight. We never intended that a plugin would utilize a database application and honestly, I didn't know about this one. Seems to me using sqlite for something like this is a bit overkill ... but oh well, we'll add it back once we figure out wtf is wrong with plex/sqlite db corruption. This is correct. Also correct that it should have nothing to do with the issue. But it seems like it's more than just "a few users", which is why we're trying what we can, as fast as we can, to get to the bottom of it. Please no more beating. Sure, in an ideal world it maybe should have been a beta/rc/whatever, but there are other significant security patches which need to be distributed to the user base. As for breaking this plugin, all I can say is "oops".
  26. 3 points
    For Tehuti, in this 6.7.1 release we're using tn40xx-0.3.6.17; for the 6.8 beta we're using tn40xx-0.3.6.17.2, whose only change, supposedly, is to compile against the 5.x Linux kernel. For Intel in the 6.7.1 release, correct: 5.5.5. For the Unraid 6.8 beta we still use the stock ixgbe drivers, because Intel still has not released a build for the Linux 5.x kernel.
  27. 3 points
    It got pushed back, but it looks like the necessary changes to rclone union allowing unionfs to be dropped will be in the next release - 1.49 https://github.com/ncw/rclone/milestone/36
  28. 3 points
    There is another topic on the subject but I'm giving up on that one: we got to the point where the OP was convinced that appdata/plex residing on a single-disk btrfs cache device and mapped using the path /mnt/cache/appdata/plex was stable. What would be helpful is a) others to confirm, and then b) after confirming this exact config is stable, simply change the path to /mnt/user/appdata/plex. This will tell me if simply passing I/O through the 'shfs' layer is introducing this issue. If it starts to fail, then there are other tests to try.
  29. 3 points
    I've bumped Unmanic to 0.0.1-beta5. This includes the following changes:
    Modify some historical logging of useful stats. This sets us up to start adding extra info like ETA during the conversion process, as well as the stats mentioned below.
    Adds a new "See All Records" screen. Any suggestions for stats that you would like on this screen would be great. Note that due to the changes in logging, only newly converted items will show here. Old stuff won't, due to missing statistics data. Sorry.
    Create backups of settings when saving. There were some cases where the settings were invalid but still saved, which corrupted our data and made it impossible to read, so now we test prior to committing changes.
    FFmpeg was causing some errors on certain files. If you have noted any conversion failures in the past, can you please re-test with this version to confirm whether it is now resolved.
    Log rotation: if you are debugging, you are spewing a crap ton of data to the logs. This update rotates the logs at midnight every day and keeps them for 7 days. Even if you are not debugging, this is much better.
    The next milestone is to add extended functionality to the settings: https://github.com/Josh5/unmanic/milestone/4 This will hopefully be the last major tidy-up of core functionality. I think that once this milestone is complete we can safely pull this out of beta and look at things like HW decoding and improving the data that is displayed throughout the WebUI.
  30. 3 points
    I would like to know if Unraid has any plans to release an official API.
  31. 2 points
    Summary: Support Thread for ich777 Gameserver Dockers (Counter-Strike: Source & Counter-Strike: GO, Team Fortress 2, ArmA III,... - complete list in the second post) Application: SteamCMD DockerHub: https://hub.docker.com/r/ich777/steamcmd DonationLink: https://www.paypal.me/chips777 All dockers are easy to set up and are highly customizable. All dockers are tested with the standard configuration (port forwarding,...) to verify that they are reachable and show up in the server list from the "outside". The standard password for the gameservers, if enabled, is: Docker. Please read the description of each docker and the variables when you install it (some dockers need special variables to run). If you like my work, please consider donating for further requests of game servers where I don't own the game. Created a Steam Group: https://steamcommunity.com/groups/dockersforunraid
  32. 2 points
    Have you enabled destructive mode in UD settings? Sent from my NSA monitored device
  33. 2 points
    Quick update: We received our first "royalty" check from Zazzle, our merch shop, and have donated the proceeds to The Ocean Cleanup. Full details are in our blog. https://unraid.net/blog/merch-shop Thanks again to all who participated in our license giveaway and to all reppin' Unraid with some merch. Cheers
  34. 2 points
    Or if you only use the card for passthrough to a VM.
  35. 2 points
    Not useless at all when you don't own any nvidia hardware.
  36. 2 points
    On a casual note, do you have a favorite beer recipe, "Tom's Brew," or do you tend to experiment, change it up from batch to batch?
  37. 2 points
    Just a note for those with issues: the latest radarr update fixes the problems between radarr and deluge. I personally don't use it, but I just set it up as a test and it worked like a charm.
  38. 2 points
  39. 2 points
    Oh dear. My prayers for you and your family, mate.
  40. 2 points
    It's easy to miss nuance of mood in forum posts, both in people "complaining" and in people responding to "complaints". Everyone should try and keep that in mind. The fix will be available soon, let's just give this a rest please.
  41. 2 points
    None of this little discussion belongs in this thread, but it is what it is. But, every single user of CA has had to acknowledge this:
  42. 2 points
    This normally means that the upgrade did not get written correctly to the USB stick. The easiest solution is to download the ZIP file version from the Unraid site and extract all the bz* type files, overwriting the ones on the USB stick.
  43. 2 points
    Yeah, I've been keeping hourly backups of my database; it makes it easier to restore to where I need to. Just script copying it hourly to a different folder; works fine. I have just been running this command hourly (using the User Scripts addon). Obviously you'll need to create the dbbackups folder first.

tar zcvf /mnt/cache/appdata/PlexMediaServer/Library/Application\ Support/Plex\ Media\ Server/Plug-in\ Support/Databases/dbbackups/com.plexapp.plugins.library.db-$(date +%A-%H%M).tar.gz /mnt/cache/appdata/PlexMediaServer/Library/Application\ Support/Plex\ Media\ Server/Plug-in\ Support/Databases/com.plexapp.plugins.library.db
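    If it helps, here is the same idea as a minimal User Scripts sketch with the folder creation included; the paths assume the PlexMediaServer appdata layout from the command above, so adjust them to your own install:

#!/bin/bash
# Hourly Plex database backup (sketch)
DB_DIR="/mnt/cache/appdata/PlexMediaServer/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
BACKUP_DIR="$DB_DIR/dbbackups"

mkdir -p "$BACKUP_DIR"

# The %A-%H%M (weekday + time) naming means each archive is overwritten a week
# later, so the folder never holds more than 7 days of hourly backups.
tar zcvf "$BACKUP_DIR/com.plexapp.plugins.library.db-$(date +%A-%H%M).tar.gz" \
    "$DB_DIR/com.plexapp.plugins.library.db"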
  44. 2 points
    I have the same issue in TVHeadend with unRAID 6.7; transcoding is not working either. I'm using a J4015M board and I lost transcoding after updating from Unraid 6.6.6 to 6.7 (6.7.1 RC1 / 6.7.2 RC2 have the same issue). I reverted back to 6.6.6 and it is working again. So it's confirmed that this is an Unraid issue, not one with the Docker containers (TVH and Plex). How can we try to find what is causing it? Best Regards, psycmos
  45. 2 points
    Yes, a new Docker container image is now available
  46. 2 points
    Q1: Are 2.5" HDDs any good in a NAS, longevity-wise? 2.5" HDDs max out at 5TB (and that is the 15mm thickness; some cases don't let you mount 15mm 2.5" drives) while 3.5" currently maxes out at 14TB. That means you will have to use 3x as many 2.5" drives for the same capacity, which likely means needing an additional SATA controller, which adds potential points of failure. So longevity isn't more or less of a concern vs 3.5"; it's just more troublesome.
    Q2: Is ECC memory absolutely necessary? What will happen if I don't have it? It's only necessary if your hardware demands it (e.g. some server-grade motherboards/CPUs only work with ECC).
    Q3: Are the upcoming QLC SSDs any good, longevity-wise? Nobody knows! Longevity needs time to test; however, SSDs generally will last longer than HDDs simply for not having any moving parts. QLC is to SSDs what SMR is to HDDs, i.e. reducing price for higher storage capacity at the cost of lower performance. On some workloads I tested, a TLC SATA SSD is faster than the QLC Intel 660p NVMe. Emphasis on "some".
    Q4: How powerful should the PSU be for a 10-drive setup? Go to pcpartpicker, enter your hardware and see the power estimate. Then add 20% just to be safe.
    Q5: Are the following tasks achievable via unRAID? (ordered in priority) Yes, but your hardware might be slightly underpowered. It's recommended to reserve core 0 for Unraid tasks and at least 1 core for dockers, leaving only 2 left for your gaming VM. Not the end of the world, but for CPU-bound games you will be able to tell the diff.
  47. 2 points
    Great question. See below! Exactly. Just to add to this and where I think some people may get confused, port forwarding to your OpenVPN port isn't the same as, say, port forwarding port 80 (web traffic) to your unRAID/tower page. Now that would be VERY insecure. In that situation, anyone who visited your IP address would be sent straight to your tower admin page. On a high level, OpenVPN in contrast only has the one port forwarded to it and the connection attempt has to use a pre-generated key known only to the server and your workstation where the certificate is installed. This allows you to sit on your network as just another client, no matter where you are in the world, without exposure to the internet. As an extra precaution, my unRAID server sits in a DMZ behind a hardware firewall, isolated from my home network. If you (or anyone else) would be interested in that, I could write up another article on setting up a DMZ and a hardware firewall. It can be done surprisingly well on a very modest budget (<$200). -Torquewrench
  48. 2 points
    With Unraid 6.7.0 you shouldn't need i915.alpha_support=1 in your syslinux.cfg file. The i915 driver has for a while directly supported up to 8th generation Coffee Lake. It doesn't yet support 9th generation Coffee Lake though. Beyond that I'm not sure what would prevent it from working?
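    For anyone checking their own flash drive, the relevant stanza in /boot/syslinux/syslinux.cfg looks roughly like the sketch below (illustrative only - your labels and extra parameters may differ); on 6.7.0 the i915.alpha_support=1 token can simply be removed from the append line:

label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot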
  49. 2 points
    Do you see any advanced virtualization tools coming to unraid such as VM cloning, snapshots, IP management of VM’s, oversee other VM’s of other unraid machines, and managing those?
  50. 2 points
    For anyone who needs the script, here it is:

    #!/bin/bash

    # Drives
    declare -a StringArray=("ata-WDC_WD2003FYYS-70W080_WJUN0123456" "DRIVE2" "DRIVE3" "DRIVE4")

    # Show status
    echo "Current drive status: "
    for drive in "${StringArray[@]}"; do
        hdparm -W /dev/disk/by-id/$drive
    done

    # Enable write caching
    for drive in "${StringArray[@]}"; do
        hdparm -W 1 /dev/disk/by-id/$drive
    done

    # Show status again
    echo "Finished running, check if the write cache was enabled!"
    for drive in "${StringArray[@]}"; do
        hdparm -W /dev/disk/by-id/$drive
    done

    Replace the serial codes in the StringArray and you should be good to go! Pop it in User Scripts.
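    If you're not sure what to put in StringArray, the by-id names hdparm expects can be listed straight from the unRAID terminal (a quick sketch; the grep just keeps SATA disks and drops partition entries):

ls /dev/disk/by-id/ | grep '^ata-' | grep -v -- '-part'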