Leaderboard

Popular Content

Showing content with the highest reputation on 04/29/19 in all areas

  1. This Docker application will let you view your storage controllers and the drives attached to them, and perform benchmarks on both. Controller benchmarks help identify whether the attached drives could collectively exceed the controller's bandwidth if they were all being read from at the same time (such as during a parity check). Drive benchmarks let you monitor performance over time, looking for degradation or unexpected slow areas while also getting a clean SMART report.

Installation
Via the Community Applications plugin: search for "DiskSpeed".
Manual installation (the Community Applications plugin is currently having issues; here's a workaround for now):
- Save the attached "my-DiskSpeed.xml" file to your NAS under \\tower\flash\config\plugins\dockerMan\templates-user
- View the Docker tab in your unRAID administrator and click "Add Container"
- Under "Select a template", pick "my-DiskSpeed"
The defaults should work as-is unless you already have port 18888 in use. If so, change the Web Port & WebUI settings to a new port number. The Docker will create a directory called "DiskSpeed" in your appdata directory to hold persistent data. Note: privileged mode is required so that the application can see the controllers and drives on the host OS. This Docker will use up to 512 MB of RAM; RAM optimization will happen in a later BETA.

Running
View the Docker tab in your unRAID administrator, click on the icon next to "DiskSpeed" and select WebUI.

Drive Images
As of December 2022, the Hard Drive Database (HDDB) has 3,000+ drive models across 70+ brands. If you have one or more drives that do not have a predefined image in the HDDB, you have a couple of options: wait for me to add the image, which will be displayed after you click "Rescan Controllers", or add it yourself by editing the drive and uploading a drive image for it. You can browse the drive images in the HDDB to see if there's one that fits your drive, and optionally upload it so others can benefit.

Controller & Drive Identification Issues
Some drives, notably SSDs, do not report the vendor correctly or at all. If you view the drive information and it shows the same value for the vendor as for the model, or an incorrect or missing vendor, please inform me so that I can manually add the drive to the database or add code to handle it. If you have a controller that is not detected, please notify me.

Benchmarking Drives
Disk drives with platters are benchmarked by reading the drive at certain percentages for 15 seconds and averaging the speed for each second, except for the first 2 seconds, which tend to trend high. Since drives can be accessed while testing, if a min/max read speed exceeds a threshold, the test is re-performed with an increasing threshold to account for drives with bad areas. Solid-state drives are benchmarked by writing large files to the device and then reading them back. In order to benchmark SSDs, they must be mounted in unRAID and a mapping configured in the DiskSpeed Docker settings; you must restart the DiskSpeed app after mounting a device for it to be detected. For other Docker installations, an example is -v '/mnt':'/mnt/Host':'rw' if you have all your SSDs mounted under /mnt. You may need more than one volume parameter if they are mounted in different areas.
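For reference, a rough sketch of an equivalent manual docker run for other Docker hosts; the image name and container-side appdata path here are my best guesses rather than confirmed values, while the port, privileged mode, and /mnt mapping come from the notes above:

  # privileged mode so the container can see the host's controllers and drives
  docker run -d \
    --name=DiskSpeed \
    --privileged \
    -p 18888:18888 \
    -v /path/to/appdata/DiskSpeed:/tmp/DiskSpeed \
    -v '/mnt':'/mnt/Host':'rw' \
    jbartlett777/diskspeed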
Contributing to the Hard Drive Database
If you have a drive that doesn't have any information in the Hard Drive Database other than the model, or you've performed benchmark tests, a button labeled "Upload Drive & Benchmark Data to the Hard Drive Database" will be displayed at the bottom of the page. The HDDB displays the information reported by the OS for each drive, plus the average speed graphs for comparison.

Application Errors
If you get an error message, please post the error here along with the steps you took to trigger it. There will be a long string of Java diagnostics after the error message (a Java stack trace) that you do not need to include; just the error message details. If you can't get past the Scanning Hardware screen, change the URL from http://[ip]:[port]/ScanControllers.cfm to http://[ip]:[port]/isolated/CreateDebugInfo.cfm and hit enter. Note: the unRAID diagnostic file doesn't provide any help here. If submitting a diagnostic file, please use the link at the bottom of the controllers in the DiskSpeed GUI.

Screenshots: Home Screen (click the top label to return to this screen), Controller Information, Drive Information.

While the system cache is bypassed when benchmarking, some devices have a built-in cache that ignores cache-bypass commands. An initial high write speed that quickly levels out is a sign of such a cache, as shown in the screenshot below.

Screenshot: Drive Editor.

Attachment: my-DiskSpeed.xml
    1 point
  2. Hi all, I was wondering: is there a way to set up an onboard WiFi device rather than having to use Ethernet? I don't mean passing it through for a single VM to use, but rather as a bridge for all VMs to use. Thanks, Jamie
    1 point
  3. Please could the drivers for the Ryzen APUs be added? I believe the prerequisite kernel version is 4.15, and we're on 4.19.33 in the latest RC. It was mentioned here by @eschultz, but I've never seen any mention of it being implemented, or, if it has been, how to enable it. I'd like to use the GPU to help with transcoding in my Plex Docker (which, while undocumented on the Plex side, does apparently work). Even if it weren't enabled by default, and required adding boot code(s) in syslinux or a modprobe command in the go file, I'd be happy! Or even if there were documentation somewhere on creating a custom kernel with the driver enabled. The 2400G is a little workhorse, and adding GPU transcoding would make it pretty amazing!
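Purely as a sketch of what the go-file route might look like if the amdgpu module were shipped (it isn't today, which is the point of this request), mirroring the i915 lines people use for Intel iGPU transcoding:

  #!/bin/bash
  # Start the Management Utility
  /usr/local/sbin/emhttp &
  # hypothetical: load the AMD GPU driver and open up /dev/dri for Docker;
  # this only works if the kernel actually includes the module
  modprobe amdgpu
  chown -R nobody:users /dev/dri
  chmod -R 777 /dev/dri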
    1 point
  4. It could be! You might want to go through a power off/on cycle to make sure. If the system still isn't seeing the drive, then it has probably genuinely failed physically.
    1 point
  5. Good to see another 7D2D fan on here. I'll have to give this one a shot; I've used Ubuntu in the past for hosting my server but switched to Windows recently. Docker should make things even easier, thanks for the info!
    1 point
  6. You can use an SFF-8087 to SFF-8088 adapter for external connections, or an Intel or HP SAS expander card with an external output. Either way it doesn't affect fan speed; the server doesn't care, in my experience.
    1 point
  7. The accounts that got compromised also got an email asking them to change their password and their GitHub API key. Linuxserver.io did not get any such email. Personally, I did get one.
    1 point
  8. I ordered a coffee cup in February along with my license key.
    1 point
  9. Both disks on the ASMedia controller dropped offline; if it's an add-on card, check that it's well seated.
    1 point
  10. @shanestorey Been following on Twitter for a while. It's not often I've paid for software when there's a free alternative; Unraid is definitely worth it!!
    1 point
  11. A server reboot would be necessary after unplugging the USB stick from a live system, at which point the disks are encrypted and awaiting proper key input. The attack vector is a bit long-winded and would probably be a very targeted attack; moreover, it means the knowledgeable attacker is already on premises with hands on the server. I don't know if the typical user warrants such a level of security, but I guess it's possible to wrap the entire USB boot process with an optional mini encrypted disk image on the USB drive. I'd guess Limetech cannot afford such a development direction, unless they have a number of enterprise customers and support contracts lined up for the resulting security-featured product (the resulting encryption makes it difficult for normal users to fix any issues). Personally, I'd rather store the USB boot drive inside the server chassis (physically hard-locked, with the necessary alarms), which mitigates the perceived attack vector.
    1 point
  12. You have to remember that Plex only uses the GPU for encoding the video; the video decoding is done by your CPU. In addition, the audio will always be transcoded by your CPU. There are multiple posts in this thread about how to get Plex to decode on the GPU as well, but it's not supported by Plex. You could try Emby, but you have to pay to get hardware transcoding.
    1 point
  13. I use a few H220s in a couple of servers, which is essentially an LSI card. A couple of LSI-branded models are popular around here and can be found by googling "unRAID HBA recommendation" (don't use the forum search, it's junk), but I can't recall them off the top of my head. ProLiants can be finicky, but they're also great workhorses.
    1 point
  14. You're probably going to have to get an HBA card. Unraid doesn't play well with onboard HP P4xx RAID controllers in my experience, regardless of mode.
    1 point
  15. https://www.pcworld.com/article/3314983/intel-905p-nvme-ssd-review-blazing-random-access-and-amazing-endurance-for-a-hefty-price.amp.html Here's a good example of an NVMe device that boasts strong endurance. Really, really strong endurance. Obviously the quality of the flash used will determine the durability, but that's true regardless of the protocol.
    1 point
  16. @unstatic The FiveM Docker will go live in the next few hours.
    1 point
  17. For bandwidth, you are 100% right, especially since this x16 slot only has x8 routing. And I'm not looking at adding, or expecting to use this build for, any multi-GPU acceleration features that the additional lanes would help with anyway. (This is a notable downside of the T310 architecture compared to other modern server motherboard designs, but something I can probably live with given the price I paid.) But it also "might" make a slight difference for power, since a full-sized x16 graphics card (without additional 6- or 8-pin connectors) can typically draw up to 5.5 A at +12 V (66 W) from the slot. Dell may have limited it to 40 W, which makes GPU card selection a little trickier. I didn't see any specific documentation on the x8 slots, but noted that x4 cards are typically limited to 25 W, and from what I've read in other forums the x8 slots are potentially limited to 25 W in the same way. Again, given that the T310 server's power supply is 400 W, I don't want to even fool with Molex-to-6/8-pin PCIe power adapters.
    1 point
  18. 1. Do you have backups of anything important? 2. Are all your drives verified as currently healthy? 3. Have you done a parity check with zero errors in the past few days?

If the answer to all three is yes, the quickest and simplest method is to do a new config, assign all the drives exactly as you wish, and let parity build on the new 12TB. No need to copy anything anywhere.

- Take a backup of your flash by clicking on it in the main GUI and downloading the backup.
- Make a note of all your current drive serial numbers and their assigned positions.
- Set a new config, with "preserve all" checked, on the Tools tab of the GUI.
- Make changes by selecting one of the 12TB drives as parity, which will allow you to assign the current 4TB to a data slot; assign the second 12TB to another data slot.
- Start the array and allow parity to be built.
- After a successful parity build, do a non-correcting check.
- After that comes back with zero errors, verify that only the old 4TB parity drive and the new 12TB data drive show as unformatted, and select the checkbox to allow them to be formatted.
- Change any share inclusions and exclusions to use the new drive slots as you see fit.

Don't proceed through the steps until you understand what each one does and what constitutes a proper outcome, step by step.
    1 point
  19. I believe this would work:

  docker run -d \
    --name=dokuwiki \
    -p 8443:443 \
    -v /docker/appdata/dokuwiki:/config:rw \
    linuxserver/dokuwiki

The "-p" sets the port you want exposed; change the "8443" to whatever you like if you don't like 8443. Please note the maintainer left a message: "This is a Container in active development by the LinuxServer.io team and is not recommended for use by the general public."

Edit: Another Docker developer has a container as well, if the first doesn't work for you:

  docker run -d \
    --name=dokuwiki \
    -p 8880:80 \
    -p 8443:443 \
    -v /docker/appdata/dokuwiki:/bitnami:rw \
    bitnami/dokuwiki
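Once either container is up, a quick sanity check with standard docker commands (nothing specific to these images):

  # follow the container's startup log to confirm it came up cleanly
  docker logs -f dokuwiki
  # then browse to https://[ip]:8443 (or http://[ip]:8880 for the bitnami one)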
    1 point
  20. Just to follow up on this: the emulation flag has been enabled in Unraid (at least on version 6.6.6 that I'm running now; not sure where it got changed), so we can now run Dockers like this 7DtD server and other steamcmd servers.

- For setup, I went into Apps. At the bottom left, in settings, I made sure to enable DockerHub. Went back to Apps and did a search for "7dtd" (which reveals no results, but also a link to DockerHub). Once I clicked the link it showed me various versions of it; I used the one from didstopia.
- This brings up the Docker setup menu. I went to the dockerfile and looked up the folder, ports and variables I needed to set. I set up a path for /steamcmd/7dtd to a folder on my cache drive, set up all the ports, and set up all the variables (in case I need to change them in the future they will be ready; see the run-command sketch at the end of this post). See the screenshot below for an example (though the variables aren't shown in that view). Note that you don't need to add "" as part of the keys, just type in the values and they are added automatically.
- I set up port forwarding on my router for 26900 through 26902 to my unRAID server.
- The Docker was started the first time so it could build and download all the files, though it promptly crashed (using A17.2). Stopped the container.
- Had to log in to the server through telnet, go to my 7dtd folder on the cache, and navigate to the server_data folder. This is where the real serverconfig.xml is stored (ignore the one in the main 7dtd folder). I had to "chmod 777 serverconfig.xml" in order to make changes.
- Then I navigated to my folder through SMB, found this serverconfig.xml file, and opened it using Notepad++ (not the regular Notepad). I had to delete the complete lines for ZombiesRun and BlockDurabilityModifier. This is critical; I tried to comment them out but didn't do it right, and it took me a while to figure out that I needed to wipe them out completely.
- I went ahead and altered the file to my preferred settings, and I added some of the new settings for A17.2. See this.
- After this it was able to start, and I was able to log in.

I hope this helps anyone who may be wanting to add 7DtD or other similar dedicated-server Dockers to their unRAID. I found help all across this forum and others (some of which are linked), and the good Docker FAQ here in the forums helped quite a bit.

Update: There is now a proper Docker maintained by ich777; it's what I use now, and he made it very easy to install!
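And here's the run-command sketch mentioned above: a rough equivalent of the template settings, assuming the didstopia image and the default 7DtD ports (the host path and container name are illustrative, and the environment variables are omitted):

  docker run -d \
    --name=7dtd \
    -p 26900:26900/tcp \
    -p 26900:26900/udp \
    -p 26901:26901/udp \
    -p 26902:26902/udp \
    -v /mnt/cache/appdata/7dtd:/steamcmd/7dtd \
    didstopia/7dtd-server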
    1 point
  21. We have had nothing but problems with ADATA-branded drives. It is definitely not anything Unraid is doing; they are just a cheap brand with very poor reliability.
    1 point
  22. Sorry, I'm not a native English speaker, so I didn't mean to be rough, just to tell the truth. I had it running with 4 GB (2 GB for the VM) and it was working "fine", even with some plugins, so this must be something on your end. I just had to auto-restart all Dockers every day because some random Dockers were filling up RAM. Anyway, RAM prices are cheap, and it costs around 1-2 watts per stick, so doubling the RAM usually doesn't cost anything. Windows 10: you wouldn't install Windows 10 on a 1 GHz machine with 1 GB of RAM, would you? That's just the minimum it can run on, nothing more, nothing less.
    1 point
  23. Pretty much anything that fails was working fine at some point.
    1 point
  24. Might help if you tagged someone who would know... like... @limetech @jonp
    1 point
  25. Qemu 4.0 is out. I wonder if it'll be in the next RC release for 6.7...
    1 point
  26. I know the card works because it actually did work, for a whole glorious hour yesterday; at 18:50 I got 10 streams with only about 9-10% load. It's a Quadro P2000, which everyone wrote was the one to get... and then after a reboot it just stopped working today. Since then I have tried to think about what could have changed (the privileged setting was one thing!). Okay, so changing the go file to:

  #!/bin/bash
  # Start the Management Utility
  /usr/local/sbin/emhttp &
  #Setup drivers for hardware transcoding in Plex
  #modprobe i915
  #chown -R nobody:users /dev/dri
  #chmod -R 777 /dev/dri

Again, thanks for your help!
    1 point
  27. Bilged the last two posts in this topic - please keep it civil, people.
    1 point
  28. Really, man! There are plenty of satisfied users without issues. Sorry if it doesn't work for you or you don't know how to use it properly. In the past four years of using my plugin, it has reported a few corrupted files due to a power outage and a few mismatched files due to a flaky program; other than that, no errors. It works as designed.
    1 point
  29. After a ton of Google-fu I was able to resolve my problem. TL;DR: the write cache on the drive was disabled.

I found a page called "How to Disable or Enable Write Caching in Linux". The article covers both ATA and SCSI drives, which I needed, as SAS drives are SCSI and are a totally different beast.

  root@Thor:/etc# sdparm -g WCE /dev/sdd
      /dev/sdd: HGST HUS726040ALS214 MS00
      WCE 0 [cha: y, def: 0, sav: 0]

This shows that the write cache is disabled.

  root@Thor:/etc# sdparm --set=WCE /dev/sdd
      /dev/sdd: HGST HUS726040ALS214 MS00

This enables it, and my writes returned to the expected speeds.

  root@Thor:/etc# sdparm -g WCE /dev/sdd
      /dev/sdd: HGST HUS726040ALS214 MS00
      WCE 1 [cha: y, def: 0, sav: 0]

This confirms the write cache has been set.

Now I'm not totally sure why the write cache was disabled under unRAID; bug or feature? While doing my googling there was a mention of a kernel bug a few years ago where, if system RAM was more than 8 GB, it disabled the write cache. My current system has a little more than 8 GB, so maybe?
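If the setting needs to survive a power cycle (the "sav: 0" above suggests it currently wouldn't), sdparm can also write the saved values page; a sketch:

  # also write the saved values page so the setting persists across power cycles
  sdparm --set=WCE --save /dev/sdd
  # verify: "sav" should now read 1
  sdparm -g WCE /dev/sdd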
    1 point
  30. As someone else who lives in an apartment, this would be a huge help to me as well, as I am getting ready to build my first server.
    1 point
  31. I see this thread has been idle for a month or so, but I found it because it is the most recent thread that comes up on Google when searching for "unraid wireless"... which I think is a much-needed feature, based on quite a few threads coming up on Google asking for it. I live in a small one-bedroom apartment where the only wired access to the internet is at a small desk between my bathroom and bedroom doors. There is not enough space there for my unRAID server and my gaming PC, so I have placed my unRAID server in my living room by the TV, because that is the only place I really have space. I also plan on having it run a gaming VM to play VR off of in the future (so I don't have to move my gaming PC back and forth for VR and normal PC games). I purchased a powerline Ethernet adapter like this one, based on most threads saying that is the best option... but I think my apartment building has too much interference on the AC power lines, because I only get around 6 megabits/sec; it's not like it's going that far, either, my apartment is small. Running an Ethernet cable across the floor in front of doorways is not an option, and since I am renting the place I can't drill into the walls or tear up the carpet to run cables either. I own a PCIe wireless card that I could add to my server, but unRAID doesn't support it. Like pwm said, the important thing is that different users have different use cases and different needs. Even though it is optimal to have a wired connection, it is not always practical or possible, especially when wireless would be a better/faster solution for people like me, where any possible wired option is actually slower than my wireless speeds... so slow that I can't even stream Plex movies to my Chromecast from it. Anyone know if there is a way to contact the devs for a formal feature request? I feel like this feature should at least be given a look.
    1 point
  32. I was thinking more of the Linux route, e.g. using wpaconfig to set up a wireless network, if at all possible.
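For what it's worth, the usual Linux route is wpa_supplicant; a minimal sketch, assuming the kernel has a driver for the card and the interface shows up as wlan0:

  # generate a config containing the network's credentials
  wpa_passphrase "MySSID" "MyPassphrase" > /etc/wpa_supplicant.conf
  # associate in the background using that config
  wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
  # request an IP address on the wireless interface
  dhcpcd wlan0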
    1 point
  33. The plugin works with the extended attributes of the files to attach and verify hash values; it doesn't rely on external files. Export can be used to keep a copy of the calculated hash values, though this is completely optional. An initial build is required, and afterwards files normally get updated automatically (when protection is enabled). A notification is given when files are found that didn't get a hash value attached; these should be exceptions, though.
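If you're curious, you can inspect such attributes yourself with the standard attr tools; a sketch (the file path and the exact attribute name the plugin uses are illustrative):

  # dump all user-namespace extended attributes on a file
  getfattr -d /mnt/disk1/Media/example.mkv
  # output would look something like (attribute name illustrative):
  # user.hash.sha256="9f86d08..."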
    1 point