

Popular Content

Showing content with the highest reputation on 04/29/19 in all areas

  1. 1 point
    This Docker application lets you view your storage controllers and the drives attached to them, and run benchmarks on both. Controller benchmarks help identify whether the drives attached to a controller could exceed its capacity if they were all fully read from at the same time (such as during a parity check). Drive benchmarks let you monitor performance over time to look for degradation or unexpected slow areas while still getting a clean SMART report.

Installation
Via the Community Applications: search for "DiskSpeed".

Manual installation (the Community Applications plugin is currently having issues; here's a workaround for now):
- Save the attached "my-DiskSpeed.xml" file to your NAS under \\tower\flash\config\plugins\dockerMan\templates-user
- View the Docker tab in your unRAID administrator and click "Add Container".
- Under "Select a template", pick "my-DiskSpeed".

The defaults should work as-is unless you already have port 18888 in use. If so, change the Web Port & WebUI settings to a new port number. The Docker will create a directory called "DiskSpeed" in your appdata directory to hold persistent data. Note: privileged mode is required so that the application can see the controllers & drives on the host OS. This Docker will use up to 512MB of RAM; RAM optimization will happen in a later BETA.

Running
View the Docker tab in your unRAID administrator, click the icon next to "DiskSpeed" and select WebUI. A new window will open. On first run (or after the Docker app is updated, or you choose to rescan hardware), the application will scan your system to locate drive controllers and the hard drives attached to them.

Drive Images
As of this post, the Hard Drive Database (HDDB) has 900+ drive models across 20+ brands.
If you have one or more drives without a predefined image in the HDDB, you have a couple of options: wait for me to add the image (it will be displayed after you click "Rescan Controllers"), or add it yourself by editing the drive and uploading a drive image for it. You can browse the drive images in the HDDB to see if there's one that fits your drive, and optionally upload it so others can benefit.

Controller & Drive Identification Issues
Some drives, notably SSDs, do not report the vendor correctly or at all. If you view the drive information and the vendor is the same as the model, or is incorrect or missing, please let me know so I can manually add the drive to the database or add code to handle it. If you have a controller that is not detected, please notify me as well.

Benchmarking Drives
The current method of benchmarking a hard drive is to read it at certain percentages of its capacity for 15 seconds and take the average speed over each of those seconds, excluding the first 2 seconds, which tend to trend high. Hard drives report an optimal block size to use while reading; if one isn't reported, a block size of 128K is used. Since Docker under unRAID requires the array to be running, SpeedGap detection was added to catch other disk activity during the test by comparing the smallest and largest number of bytes read per second over the 15 seconds. If a gap larger than the current threshold (starting at 45MB) is detected, the allowed gap is increased by 5MB and the spot is retested. If a drive keeps triggering SpeedGap detection at every spot, you may need to disable SpeedGap detection, as the drive is simply erratic in its read speeds. One drive per controller is tested at a time. In the future, each drive will be tested to find its maximum transfer rate, and the system will be tested to see how much data each controller can transfer and how much data the entire system bus can handle.
Then multiple drives across multiple controllers will be read simultaneously while keeping the overall bandwidth per controller and for the system under the measured maximum transfer rates.

Contributing to the Hard Drive Database
If you have a drive with no information in the Hard Drive Database other than its model, or you've performed benchmark tests, a button labeled "Upload Drive & Benchmark Data to the Hard Drive Database" will be displayed at the bottom of the page. The HDDB displays the information reported by the OS for each drive, plus the average speed graphs for comparison.

Application Errors
If you get an error message, please post it here along with the steps you took to trigger it. There will be a long string of Java diagnostics after the error message (a Java stack trace) that you do not need to include; just the error message details. If you can't get past the Scanning Hardware screen, change the URL from http://[ip]:[port]/ScanControllers.cfm to http://[ip]:[port]/isolated/CreateDebugInfo.cfm and hit enter. Note: the unRAID diagnostic file doesn't provide any help here. If submitting a diagnostic file, please use the link at the bottom of the controllers in the DiskSpeed GUI.

Home Screen (click the top label to return to this screen)
Controller Information
Drive Information
Drive Editor
my-DiskSpeed.xml
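The read-at-fixed-percentages idea described above can be sketched with plain dd. This is a rough illustration only, not DiskSpeed's actual code: the 16MB scratch file stands in for a real drive (on a live system you'd point TARGET at something like /dev/sdX), and the 4MB sample per spot is an assumption for the demo.

```shell
#!/bin/sh
# Create a 16MB scratch file to stand in for a drive.
TARGET=$(mktemp)
dd if=/dev/zero of="$TARGET" bs=1M count=16 2>/dev/null

BS=131072                      # the 128K fallback block size mentioned above
SIZE=$(stat -c%s "$TARGET")    # on a real device: blockdev --getsize64

# Sample the read speed at fixed percentage offsets across the target.
for PCT in 0 25 50 75; do
  SKIP=$(( SIZE * PCT / 100 / BS ))
  # Read a 4MB burst at this offset, discarding the data.
  dd if="$TARGET" of=/dev/null bs="$BS" skip="$SKIP" count=32 2>/dev/null
  echo "sampled ${PCT}%"
done

rm -f "$TARGET"
```

Timing each dd burst (and throwing away the first couple of seconds, as the post describes) would give you the per-spot average speeds.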
  2. 1 point
    Hi all, I was wondering: is there a way to set up an onboard WiFi device rather than having to use Ethernet? I don't mean passing it through for a VM to use, but rather using it as a bridge for all VMs. Thanks, Jamie
  3. 1 point
    Please could the drivers for the Ryzen APUs be added? I believe the prerequisite kernel version is 4.15, and we're on 4.19.33 in the latest RC. It was mentioned here by @eschultz, but I've never seen any mention of it getting implemented, or, if it has been, how to enable it. I'd like to use the GPU to aid in transcoding in my Plex docker (which, while undocumented on the Plex side, does apparently work). Even if it wasn't enabled by default and required adding boot code(s) in syslinux, or a modprobe command in the go file, I'd be happy! Or even if there was documentation somewhere on creating a custom kernel with the driver enabled? The 2400G is a little workhorse, and adding GPU transcoding would make it pretty amazing!
  4. 1 point
    It could be! You might want to go through a power off/on sequence to make sure. If the system is still not seeing the drive then it has probably really physically failed.
  5. 1 point
    Good to see another 7d2d fan on here. I'll have to give this one a shot- have used ubuntu in the past for hosting my server but switched to windows recently. Docker should make things even easier, thanks for the info!
  6. 1 point
    You can use an 8087-to-8088 adapter for external connections, or an Intel or HP SAS expansion card with an external output. Either way it doesn't affect fan speed; the server doesn't care, in my experience.
  7. 1 point
    The accounts that got compromised also got an email asking them to change their password and their GitHub API key. Linuxserver.io did not get any such email. Personally, I did get one.
  8. 1 point
    I ordered a coffee cup in February along with my license key
  9. 1 point
    Both disks on the ASMedia controller dropped offline; if it's an add-on card, check that it's well seated.
  10. 1 point
    @shanestorey Have been following on Twitter for a while. It's not often I've paid for software when there is a free alternative. Unraid is definitely worth it!!
  11. 1 point
    A server reboot will be necessary after unplugging the USB stick from a live system, in which case the disks are left encrypted and awaiting proper key input. The attack vector is a bit long-winded and would probably be a very targeted attack; it also means the knowledgeable attacker is already on premises with hands on the server. I don't know if the typical user warrants such a level of security, but I guess it's possible to wrap the entire USB boot process with an optional mini encrypted disk image on the USB drive. I can guess that Limetech cannot afford such a development direction unless they have a number of enterprise customers and support contracts lined up for the resulting security-focused product (as the added encryption makes it difficult for normal users to fix any issues). Personally, I'd rather store the USB boot drive inside the server chassis (physically hard-locked, with the necessary alarms), which mitigates the perceived attack vector.
  12. 1 point
    You have to remember that Plex only uses the GPU for encoding the video. That means the video decoding is done by your CPU. In addition, the audio will always be transcoded by your CPU. There are multiple posts in this thread about how to get Plex to decode on the GPU as well, but it's not supported by Plex. You could try Emby, but you have to pay to get hardware transcoding.
  13. 1 point
    I use a few H220s in a couple of servers, which are essentially LSI cards. A couple of LSI-branded models are popular here and can be found by googling "unRAID HBA recommendation" (don't use the forum search, it's junk), but I can't recall them off the top of my head. ProLiants can be finicky, but they're also great workhorses.
  14. 1 point
    You're probably going to have to get an HBA card. unRAID doesn't play well with onboard HP P4xx RAID controllers in my experience, regardless of mode.
  15. 1 point
    https://www.pcworld.com/article/3314983/intel-905p-nvme-ssd-review-blazing-random-access-and-amazing-endurance-for-a-hefty-price.amp.html Here's a good example of an NVMe device that boasts strong endurance. Really, really strong endurance. Obviously the quality of the flash used will determine the durability, but that's true regardless of the protocol. Sent from my Pixel 3 XL using Tapatalk
  16. 1 point
    @unstatic The FiveM Docker will go live in the next few hours.
  17. 1 point
    For bandwidth, you are 100% right, especially since this x16 slot has only x8 routing. And I am not looking at adding, or expecting to use this build for, any multi-GPU acceleration features that the additional lanes would help with anyway. (This is a notable downside of the T310 architecture compared to other modern server mobo designs, but something I can probably live with given the price I've paid.) But it also "might" make a slight difference for power, since a full-sized x16 graphics card (without additional 6- or 8-pin connectors) can typically draw up to 5.5A at +12V (66W) from the slot. Dell may have limited it to 40 watts, which makes GPU card selection a little trickier. I didn't see any specific documentation on the x8 slots, but noted that x4 cards are typically limited to 25 watts, and from what I've read in other forums the x8 slots are potentially limited to 25W in the same way. Again, given that the T310 server's power supply is 400 watts, I don't want to even fool with Molex-to-6/8-pin PCIe power adapters.
  18. 1 point
    1. Do you have backups of anything important?
2. Are all your drives verified as currently healthy?
3. Have you done a parity check with zero errors in the past few days?

If the answer to all three is yes, the quickest and simplest method is to do a new config, assign all the drives exactly as you wish, and let parity build to the new 12TB. No need for copying anything anywhere.

- Take a backup of your flash by clicking on it in the main GUI and downloading the backup.
- Make a note of all your current drive serial number assignment positions.
- Set a new config with "preserve all" checked on the Tools tab of the GUI.
- Make changes by selecting one of the 12TB drives as parity, which will allow you to assign the current 4TB to a data slot, and assign the second 12TB to another data slot.
- Start the array and allow parity to be built.
- After a successful parity build, do a non-correcting check.
- After that comes back with zero errors, verify that only the old 4TB parity drive and the new 12TB data drive show as unformatted, and select the checkbox to allow them to be formatted.
- Change any share inclusions and exclusions to use the new drive slots as you see fit.

Don't proceed through the steps until you understand what each step does and what constitutes a proper outcome, step by step.
  19. 1 point
    I believe this would work:

docker run -d \
  --name=dokuwiki \
  -p 8443:443 \
  -v /docker/appdata/dokuwiki:/config:rw \
  linuxserver/dokuwiki

The "-p" sets the port you want exposed; change the "8443" to whatever you like if you don't want 8443. Please note the maintainer left a message: "This is a Container in active development by the LinuxServer.io team and is not recommended for use by the general public."

Edit: Another docker developer has a container as well, if the first doesn't work for you:

docker run -d \
  --name=dokuwiki \
  -p 8880:80 \
  -p 8443:443 \
  -v /docker/appdata/dokuwiki:/bitnami:rw \
  bitnami/dokuwiki
  20. 1 point
    Just to follow up on this: the emulation flag has been enabled in Unraid (at least on version 6.6.6 that I am running now; not sure where it got changed), so we can now run dockers like this 7dtd server and other steamcmd servers.

- For setup, I went into Apps. Bottom left in settings I made sure to enable DockerHub. Went back to Apps and did a search for "7dtd" (which reveals no results, but also a link to DockerHub). Once I clicked on the link it showed me various versions of it; I used the one from didstopia.
- This brings up the docker setup menu. I went to the dockerfile and looked up the folder, ports and variables I needed to set. I set up a path for /steamcmd/7dtd to a folder on my cache drive, set up all the ports, and set up all the variables (in case I need to change them in the future they will be ready). See below for an example (though the variables aren't shown in this view). Note that you don't need to add "" as part of the keys; just type in the values and it adds them automatically.
- I set up port forwarding on my router for 26900 thru 26902 to my unRAID server.
- The docker was started the first time for it to build and download all the files, though it promptly crashed (using A17.2). Stopped the container.
- Had to log in to the server thru telnet, go to my 7dtd folder on the cache and navigate to the server_data folder. This is where the real serverconfig.xml is stored (ignore the one in the main 7dtd folder). I had to "chmod 777 serverconfig.xml" in order to make changes.
- Then I navigated to my folder thru SMB, found this serverconfig.xml file and opened it using Notepad++ (not the regular Notepad). I had to delete the complete lines for ZombiesRun and BlockDurabilityModifier. This is critical; I tried to comment them out but didn't do it right, and it took me a while to find out that I just needed to completely wipe them out.
- I went ahead and altered the files to my preferred settings, and I added some of the new settings for A17.2. See this.
- After this it was able to start, and I was able to log in.

I hope this helps anyone that may be wanting to add 7dtd or other similar dedicated server dockers to their unRAID. I found help all across this forum and others (some of which are linked), but the good Docker FAQ here in the forums also helped quite a bit.

Update: There is now a proper docker maintained by ich777; it's what I use now, and he made it very easy to install!
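The chmod and line-deletion steps above can be scripted instead of done by hand. This is a sketch only: the XML below is a made-up stand-in for the real serverconfig.xml (the property names come from the post, the rest is invented), and sed is one way to delete the whole lines rather than commenting them out.

```shell
#!/bin/sh
# Stand-in for the real serverconfig.xml under your 7dtd appdata path.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
<ServerSettings>
  <property name="ZombiesRun" value="0"/>
  <property name="BlockDurabilityModifier" value="100"/>
  <property name="ServerPort" value="26900"/>
</ServerSettings>
EOF

chmod 777 "$CFG"    # as in the post: make the file writable for the container

# Delete the complete lines (not just comment them out), as the post advises.
sed -i '/ZombiesRun/d; /BlockDurabilityModifier/d' "$CFG"

grep -c property "$CFG"   # prints 1: only the ServerPort property remains
```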
  21. 1 point
    We have had nothing but problems with ADATA branded drives. It is definitely not anything Unraid is doing. They are just a cheap brand with very poor reliability. Sent from my Pixel 3 XL using Tapatalk
  22. 1 point
    Sorry, I'm not a native English speaker, so I didn't mean to be rough, just to tell the truth. I had it running with 4GB (2GB for the VM) and it was working "fine", even with some plugins, so this must be something on your end. I just had to auto-restart all my Dockers every day because some of them kept filling up RAM. Anyway, RAM prices are cheap, and it costs around 1-2 watts per stick, so doubling your RAM usually doesn't cost much. Windows 10: you wouldn't install Windows 10 on a machine with a 1GHz CPU and 1GB of RAM, would you? That's just the minimum it can run on, nothing more, nothing less.
  23. 1 point
    Pretty much anything that fails was working fine at some point.
  24. 1 point
    might help if you tagged someone who would know... like... @limetech @jonp
  25. 1 point
    Qemu 4.0 is out. I wonder if it’ll be on the next rc release for 6.7... Sent from my iPhone using Tapatalk
  26. 1 point
    I know the card works because it actually did work, for a whole glorious hour yesterday: at 18:50 I got 10 streams with only 9-10% load. It's a Quadro P2000; everyone wrote that this was the one to get... and then after a reboot it just stopped working today? Since then I have tried to think about what could have changed (the privileged setting was one thing!). Okay, so changing the Go file to:

#!/bin/bash
# Start the Management Utility
/usr/local/sbin/emhttp &
#Setup drivers for hardware transcoding in Plex
#modprobe i915
#chown -R nobody:users /dev/dri
#chmod -R 777 /dev/dri

Again, thanks for your help!
  27. 1 point
    Bilged the last two posts in this topic - please keep it civil people.
  28. 1 point
    Really, man! There are plenty of satisfied users without issues. Sorry if it doesn't work for you or you don't know how to use it properly. In the past four years of using my plugin, it reported a few corrupted files due to a power outage and a few mismatched files due to a flaky program, and other than that no errors. It works as designed.
  29. 1 point
    After a ton of Google-fu I was able to resolve my problem. TL;DR: the write cache on the drive was disabled.

I found a page called "How to Disable or Enable Write Caching in Linux". The article covers both ATA and SCSI drives, which I needed, as SAS drives are SCSI and a totally different beast.

root@Thor:/etc# sdparm -g WCE /dev/sdd
    /dev/sdd: HGST HUS726040ALS214 MS00
WCE 0 [cha: y, def: 0, sav: 0]

This shows that the write cache is disabled.

root@Thor:/etc# sdparm --set=WCE /dev/sdd
    /dev/sdd: HGST HUS726040ALS214 MS00

This enables it, and my writes returned to the expected speeds.

root@Thor:/etc# sdparm -g WCE /dev/sdd
    /dev/sdd: HGST HUS726040ALS214 MS00
WCE 1 [cha: y, def: 0, sav: 0]

This confirms the write cache has been set. Now, I'm not totally sure why the write cache was disabled under unRAID; bug or feature? While doing my googling there was mention of a kernel bug a few years ago where, if system RAM was more than 8GB, the write cache got disabled. My current system has a little more than 8GB, so maybe?
  30. 1 point
    As someone else who lives in an apartment, this would be a huge help to me as well, as I am getting ready to build my first server.
  31. 1 point
    I see this thread has been idle for a month or so, but I found it because it is the most recent thread that comes up on Google when searching for "unraid wireless"... which I think is a much-needed feature, based on quite a few threads coming up on Google asking for it.

I live in a small one-bedroom apartment where the only wired access to the internet is at a small desk between my bathroom and bedroom doors. There is not enough space there for my unRAID server and my gaming PC, so I have placed my unRAID server in my living room by the TV, because that is the only place I really have space. I also plan on having it run a gaming VM to play VR off of in the future (so I don't have to move my gaming PC back and forth for VR and normal PC games).

I purchased a powerline (AC) Ethernet adapter like this one, based on most threads saying that is the best option... but I think my apartment building has too much interference on the AC power lines, because I only get around 6 megabits/sec. It's not like it's going that far, either; my apartment is small. Running an Ethernet cable across the floor in front of doorways is not an option, and since I am renting the place I can't drill into the walls or tear up the carpet to run cables either. I own a PCIe wireless card that I could add to my server, but unRAID doesn't support it.

Like pwm said, the important thing is that different users have different use cases and different needs. Even though it is optimal to have a wired connection, it is not always practical or possible, especially when wireless would be a better/faster solution for people like me, where every possible wired option is actually slower than my wireless speeds... so slow that I can't even stream Plex movies to my Chromecast from it.

Anyone know if there is a way to contact the devs for a formal feature request? I feel like this feature should at least be given a look.
  32. 1 point
    I was thinking more the Linux route e.g. using wpaconfig to set up a wireless network, if at all possible.
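For what it's worth, on a stock Linux box that idea maps to a wpa_supplicant configuration. unRAID doesn't ship wireless drivers or tooling, so this is purely a hypothetical sketch; the SSID and passphrase below are made up.

```
# /etc/wpa_supplicant/wpa_supplicant.conf  (standard Linux location)
ctrl_interface=/var/run/wpa_supplicant
network={
    ssid="MyApartmentWiFi"       # hypothetical network name
    psk="correct horse battery"  # plaintext passphrase; wpa_passphrase can pre-hash it
}
```

It would then be brought up with something like `wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf` followed by a DHCP client on wlan0; again, none of this is supported on unRAID today.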
  33. 1 point
    The plugin works with the extended attributes of the files to attach and verify hash values. It doesn't rely on external files. Export can be used to keep a copy of the calculated hash values. This is completely optional though. An initial build is required, and normally files get automatically updated afterwards (when protection is enabled). A notification is given when files are found which didn't get a hash value attached, these should be exceptions though.