
Leaderboard


Popular Content

Showing content with the highest reputation since 07/16/19 in all areas

  1. 4 points
    Well f off and fork your own. Been running my install since we released without a problem. I got no time for comments like this and am going to treat it with the contempt it deserves. All the code is open source feel free to make a pull request, however, I've yet to see a single person make a comment of this nature actually do so, and I'm not expecting you to be the first. Sent from my Mi A1 using Tapatalk
  2. 3 points
    Fundamentally we're volunteers, so if you think something needs doing, feel free to step up and contribute rather than say something is a joke. Yes, I do hear comments like that all too often and I treat them all in the same way. When people pay my mortgage and feed my family for the stuff we do, then they have the right to complain, because then we have a customer/client relationship. But when people are using something, for free, made by us in our spare time, I reserve the right to use whatever language I see fit, as our relationship is most certainly not a customer/client one; essentially we have no obligation to do anything. One person saying something as non-specific as "PHP is a joke" does not a problem make, and is not something I'm going to waste any more time with. Sent from my Mi A1 using Tapatalk
  3. 3 points
    Attached is a debugged version of this script, modified by me. I've eliminated almost all of the extraneous echo calls. Many of the block outputs have been replaced with heredocs. All of the references to md_write_limit have either been commented out or removed outright. All legacy command substitution has been replaced with modern command substitution. The script locates mdcmd on its own. https://paste.ee/p/wcwWV
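For anyone following along, the modernization described above looks roughly like this (a sketch; the variable names are illustrative, and `ls` stands in for the Unraid-specific mdcmd so the snippet runs anywhere):

```shell
# Legacy style the script used to have (commented out):
#   KERNEL=`uname -r`        # backtick command substitution
#   echo "Report"
#   echo "Kernel: $KERNEL"   # one echo call per output line

# Modern $(...) command substitution
KERNEL="$(uname -r)"

# A heredoc replaces the block of echo calls
cat <<EOF
Report
Kernel: ${KERNEL}
EOF

# Locating a binary on its own instead of hard-coding its path
# (the real script looks for mdcmd; 'ls' stands in here)
TOOL="$(command -v ls)"
echo "Found tool at: ${TOOL}"
```

$(...) nests cleanly and survives quoting, which is why it replaced backticks in modern scripts.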
  4. 2 points
    The critical piece to get your head around is whether or not parity is still valid. If all your drives are perfectly healthy, rebuilding parity is no big deal, it's just time consuming. If, on the other hand, you have a drive failure, or two, then reassigning the remaining drives to the correct slots is the only way to emulate the failed drives so you can rebuild them.

It gets complicated, you have to use the command line before starting the array and after setting the correct assignments, but it's doable. For safety's sake, it's always best to keep a current list of which serial number drives are assigned where. Taking a fresh flash backup each time something major is changed is wise.

In your scenario where you are making good guesses as to which drive goes where, if you needed to rebuild a single failed drive, then yes, you would only assign parity1. However, if you were truly guessing, you would only have a 50% chance of getting it right, and may end up having to repeat the trust parity rebuild a second time with the correct parity drive, as putting parity2 in parity1's slot would result in invalid parity and a failed rebuild.

Again, if you don't care if parity is invalid when you assign the drives, the only thing that matters is that you properly ID the parity drives in either order, and you can build parity from the healthy data drives. It's only in a failed drive scenario where you need to recover that it's important. You can build either or both parity drives at any time as long as your data drives stay healthy.

The OP's recovery scenario was risky, power failures can kill drives as well as corrupt data. I was trying to detail a recovery path with the highest possible chance of success.
  5. 2 points
    I'm trying to take another look at DVB/Nvidia integration; no ETA at the moment.
  6. 2 points
    To add to jonathanm's answer, the script starts a series of partial, read-only, non-correcting parity checks, each with slightly tweaked parameters, and logs the performance of each combination of settings. Essentially, it is measuring the peak performance of reading from all disks simultaneously, and showing how that can be tweaked to improve peak performance.

Improving peak performance is not the same thing as improving the time of a full parity check, as your parity check only spends a few minutes at the beginning of the drives where peak performance has an impact, as performance gradually tapers off from the beginning of your drive to the end. If your peak performance was abnormally slow (i.e. 50 MB/s), then that would affect a much larger percentage of the parity check, and improving that to 150 MB/s would make a huge improvement in parity check times, but increasing from 164 MB/s to 173 MB/s won't make much of a difference, since essentially you were already close to max performance and that small increase will only affect perhaps the first few % of the drive space.

In a similar way, I could improve aerodynamics on my car to increase top speed from 164 MPH to 173 MPH, but that won't necessarily help my work commute where I'm limited to speeds below 65 MPH. But if for some reason my car couldn't go faster than 50 MPH, any increase at all would help my commute time.

There are a handful of drive controllers (like the one in my sig) that suffer extremely slow parity check speeds with stock Unraid settings, so I see a huge performance increase from tweaking the tunables. There is also some evidence that tweaking these tunables can help with multi-tasking (i.e. streaming a movie without stuttering during a parity check), and for some users this seems to be true. I know there are some users who have concerns that maximizing parity check speed takes away bandwidth for streaming, though I don't think we ever actually saw evidence of this.
That's a shame, as that is really what is needed to make this script compatible with 6.x. LT changed the tunables from 5.x to 6.x, and the original script needs updating to work properly with the 6.x tunables. Fixing a few obsolete code segments to make it run without errors on 6.x doesn't mean you will get usable results on 6.x. I had created a beta version for Unraid 6.x a while back, but testing showed it was not producing usable results. I documented a new tunables testing strategy based on those results, but never did get around to implementing it. It seems that finding good settings on 6.x is harder than it was for 5.x - possibly because 6.x just runs better and there are fewer issues to be resolved. I still have my documented changes for the next version around here somewhere...

That's another shame. Seems like you know what you're doing, more so than I do with regards to Linux. I'm a Windows developer, and my limited experience with Linux and Bash (that's what this script is, right?) is this script. For me to pick it up again, I have to essentially re-learn everything. I keep thinking a much stronger developer than I will pick this up someday. I'm not trying to convince users not to use this tool, and I certainly appreciate someone trying to keep it alive, but I did want to clarify that the logic needs improvement for Unraid 6.x, and you may not get accurate recommendations with this Unraid 5.x tunables tester.

Paul
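For anyone wanting to see what the 6.x tunables discussion refers to, here is a sketch of listing them from a config file. This assumes the 6.x tunable names such as md_num_stripes and md_sync_window, and uses a sample file in place of the real /boot/config/disk.cfg:

```shell
# Sample file standing in for /boot/config/disk.cfg on a 6.x box
# (names and values here are illustrative, not recommendations)
cfg="/tmp/disk.cfg.sample"
cat > "$cfg" <<'EOF'
md_num_stripes="1280"
md_sync_window="384"
md_sync_thresh="192"
EOF

# List each md_* tunable and its value
grep '^md_' "$cfg" | while IFS='=' read -r name value; do
  echo "tunable: $name = $value"
done
```

On a real server you would point `cfg` at /boot/config/disk.cfg and compare against the defaults before testing any changes.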
  7. 2 points
    Hey everyone! After months of testing and learning, I finally managed to have 2 of my 4 1080TIs run in SLI. The information on how to do this has actually been online for a while but a bit scattered about (at least that was my experience).

Overview of Steps
1) Achieve GPU pass-through
2) Mod the Nvidia drivers to allow SLI in our VM
3) Use Nvidia Profile Inspector to get much better performance

Edit Update
After utilizing a few other VM optimizations, specifically CPU pinning, my SLI performance DRASTICALLY improved. My FPS in SLI went from the mid 40s to 70+ (I used a few different benchmarks such as the Unigine benchmarks and also personal experience playing ESO). When I started trying to get SLI to work in Unraid, I noticed that just passing through 2 GPUs to a single VM already resulted in a very noticeable gain in performance. I am still tinkering with Nvidia Profile Inspector, so things might change. If they do, I will post an update.

GPU pass-through
My VM options:
Bios: OVMF
Machine: Q35.1
Sata for ISO drivers and VirtIO for primary vDisk
Follow the instructions in this Spaceinvader One video. Afterwards, pass through your 2 GPUs and they should appear in Windows.

Nvidia Drivers Mod
Note: If you have any difficulties with this next part, you are better off asking for help on the DifferentSLIAuto forum thread. To my understanding, motherboard manufacturers must license the right to allow SLI on their boards from Nvidia. The reason we haven't been able to achieve SLI in Unraid is that our VM's "motherboard" info simply doesn't qualify as an Nvidia-approved motherboard for SLI. Luckily there has been a hack available for a while that allows SLI to be enabled for not only any motherboard but also any GPUs (aka, the GPUs don't even need to be the same model). This is what worked for me: I used Nvidia driver version 430.86. If you use the same version, then these instructions SHOULD work for you.
Install Nvidia Drivers
The original method/program used by DifferentSLIAuto is no longer working for the latest versions of Nvidia drivers (driver versions 4xx and on). We have two choices: we can go with the old method and use an older driver, or we can mod newer drivers manually. The latter is what I did, and what I'll be describing:

Download DifferentSLIAuto version 1.7.1
Download a hex editor (I used HxD)
Copy nvlddmkm.sys from C:\Windows\System32\DriverStore\FileRepository\nv_dispi.inf_amd64_b49751b9038af669 to your DifferentSLIAuto folder (NOTE: if you are not using driver version 430.86, then the nvlddmkm.sys file you must modify will be located somewhere else and you must find it yourself by going to Device Manager > Display adapters > YOUR CARD).
Mod the copied nvlddmkm.sys file by opening it in a hex editor. Here are the changes for driver 430.86:

Address: [OLD VALUE] -> [NEW VALUE]
000000000027E86D: 84 -> C7
000000000027E86E: C0 -> 43
000000000027E86F: 75 -> 24
000000000027E870: 05 -> 00
000000000027E871: 0F -> 00
000000000027E872: BA -> 00
000000000027E873: 6B -> 00

Save and exit the hex editor.
In your DifferentSLIAuto folder, right click and edit install.cmd. Replace all instances of "nv_dispi.inf_amd64_7209bde3180ef5f7" with the location of where our original nvlddmkm.sys file was; in our case this is "nv_dispi.inf_amd64_b49751b9038af669". The install.cmd file will modify the copy we added to the folder and replace the original one found at the location we specify here. Use this video for reference, but please note that in the video the driver version is different than ours, and they replace nv_dispi.inf_amd64_7209bde3180ef5f7 with nv_dispi.inf_amd64_9ab613610b40aa98 instead of nv_dispi.inf_amd64_b49751b9038af669.
Move your DifferentSLIAuto folder to the root of your C:\ drive.
Set UAC to the lowest setting (OFF) in Control Panel\All Control Panel Items\Security and Maintenance.
Run cmd.exe as admin and enter:

bcdedit.exe /set loadoptions DISABLE_INTEGRITY_CHECKS
bcdedit.exe /set NOINTEGRITYCHECKS ON
bcdedit.exe /set TESTSIGNING ON

Restart your computer into safe mode + networking enabled (video showing how to do it quickly using Shift + Restart). Within the DifferentSLIAuto folder located at "C:\", run install.cmd as admin. After only a few seconds the CMD window text should all be green, indicating that all is well! Open up your Nvidia Control Panel, and under 3D Settings it should now say "Configure SLI, Surround, PhysX". Click that option, and under SLI Configuration select Maximize 3D performance, and that's it!

Nvidia Profile Inspector
The default settings in the Nvidia Control Panel really suck. After FINALLY getting SLI to work I was getting only 40 FPS in SLI, when I had been getting 100+ FPS prior to enabling SLI. I was about ready to give up when I came across Nvidia Profile Inspector! By changing a few settings with Nvidia Profile Inspector, I was able to finally get great SLI results (70 FPS). Keep in mind that I've only been changing settings in Profile Inspector for a few hours, so I'm sure there are many optimizations to be made; hopefully we can figure it out as a community. Run Nvidia Profile Inspector. I recommend the following settings for now for the _GLOBAL_DRIVER_PROFILE (Base Profile).
Nvidia Profile Inspector Settings:

1 - Compatibility
SLI compatibility bits: 0x02C00005
SLI compatibility bits (DX10 + DX11): 0x080000F5

5 - Common
Power management mode: Prefer maximum performance
Thread optimization: On

6 - SLI
NVIDIA predefined number of GPUs to use on SLI rendering mode: 0x00000002 SLI_PREDEFINED_GPU_COUNT_TWO
NVIDIA predefined number of GPUs to use on SLI rendering mode (on DirectX 10): 0x00000002 SLI_PREDEFINED_GPU_COUNT_TWO
NVIDIA predefined SLI mode: play with both 0x00000002 SLI_PREDEFINED_MODE_FORCE_AFR and 0x00000003 SLI_PREDEFINED_MODE_FORCE_AFR2
NVIDIA predefined SLI mode on DirectX 10: play with both 0x00000002 SLI_PREDEFINED_MODE_FORCE_AFR and 0x00000003 SLI_PREDEFINED_MODE_FORCE_AFR2
SLI rendering mode: try 0x00000000 SLI_RENDERING_MODE_AUTOSELECT, 0x00000002 SLI_RENDERING_MODE_FORCE_AFR, or 0x00000003 SLI_RENDERING_MODE_FORCE_AFR2

MAKE SURE TO HIT APPLY CHANGES (top right-hand corner).

Next we will make some changes in Control Panel > Nvidia Control Panel:

Manage 3D Settings > Global Settings
Power management mode: Prefer maximum performance
SLI rendering mode: start by leaving this alone, and then make it match your Nvidia Profile Inspector settings (so if you are trying 0x00000002 AFR, set this to Force alternate frame rendering 1, and if you are trying 0x00000003 AFR2, set this to alternate frame rendering 2)

And that's it! Now keep in mind the settings above are far from the best, and are only a starting point for us. It is probably best to find individual game profiles for each title and go from there. I will be googling "Nvidia Profile Inspector <insert game here>" for a while and trying different settings out. Make sure you change the "NVIDIA predefined number of GPUs" setting to TWO if you change profiles, because in my experience it was defaulting to FOUR (this may be because I have 4 physical cards installed on the motherboard, so if someone else gets different results please let me know).
SOME CLOSING THOUGHTS
I did some additional research which led me to open up my motherboard manual. I discovered that in my case, my motherboard's PCIe slots change speed depending on a wide number of factors (for example, if I have a 28-lane CPU, some of my PCIe 3.0 slots (PCIE1/PCIE3/PCIE5) stop functioning at x16 and instead run at x16/x8/x4. If that wasn't a big enough kick in the nuts, since I have an M.2 SSD in my M.2 slot, my PCIE5 slot doesn't function at all). All in all, this was a fun adventure for me, and I really hope this information helps people who are interested in trying SLI via VMs!
  8. 1 point
  9. 1 point
    That's the problem, right? Something about your Win10 PC is incompatible with our Flash Creator tool. The flash creator needs to write the boot sector of the device, which is also one method of transmitting malware; maybe there is a setting in Windows that is stopping this on your PC? Just a guess. I hope your experience with Unraid goes a little better from here. I think you'll find this a friendly and helpful community.
  10. 1 point
    Just for completeness: the problem would be using no trailing slash on the source; that will create an extra folder.
  11. 1 point
    Either way works in this case: with /mnt/disk2 or /mnt/disk2/, no extra folder will be created.
  12. 1 point
    Application Name: Nessus
Application Site: tenable.com
Docker Hub: https://hub.docker.com/r/jbreed/nessus
Github: https://github.com/jbreed/nessus
UnRaid XML Template: https://github.com/jbreed/docker-templates/blob/master/nessus/nessus.xml

Please post any questions/issues relating to this docker in this thread.

Note: The initial deployment will require the user to complete the registration process. You can obtain a free license by clicking on the Nessus Essential component and it will send a license you can use for a small home network.

Feel free to submit pull requests on my GitHub, or discuss improvements in this forum as seen fit.
  13. 1 point
    You're having problems with disk1:

Jul 22 18:08:56 Unraid kernel: md: disk1 read error, sector=2286221472
Jul 22 18:08:56 Unraid kernel: md: disk1 read error, sector=2286221480
Jul 22 18:08:56 Unraid kernel: md: disk1 read error, sector=2286221488
Jul 22 18:08:56 Unraid kernel: md: disk1 read error, sector=2286221496

Looks more like a connection problem; replace the cables.
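If you want to gauge how widespread the errors are before re-cabling, you can tally them per disk straight from the syslog. A sketch, with a sample file standing in for the real /var/log/syslog:

```shell
# Sample lines standing in for /var/log/syslog
log="/tmp/syslog.sample"
cat > "$log" <<'EOF'
Jul 22 18:08:56 Unraid kernel: md: disk1 read error, sector=2286221472
Jul 22 18:08:56 Unraid kernel: md: disk1 read error, sector=2286221480
Jul 22 18:08:57 Unraid kernel: md: disk3 read error, sector=1001
EOF

# Extract the disk name from each "md: diskN read error" line
# and count occurrences per disk
sed -n 's/.*md: \(disk[0-9]*\) read error.*/\1/p' "$log" | sort | uniq -c
```

Errors concentrated on one disk usually point at that disk's cable or port; errors spread across several disks sharing a controller point at the controller or its cabling.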
  14. 1 point
    So following on from the Nextcloud video, here is a tutorial that shows how to set up and configure a reverse proxy on unRAID. It uses linuxserver's excellent Letsencrypt docker container with NGINX. You will see how to use both your own domain with the proxy as well as just using duckdns subdomains. The video covers using both subdomains and subfolders. It also goes through setting up Nextcloud with the reverse proxy. Hope it's useful.

Here's what to do if your ISP blocks port 80 and you can't use http authentication to create your certificates, and also how to make a wildcard certificate.
  15. 1 point
    Yes it does. Parity1 doesn't care about drive order, but Parity2 relies on the disk assignments staying the same. If you correctly assign all the drives, you can check the option indicating parity is already valid, and just complete the parity check. If the drives are not in the same order, Parity2 must be rebuilt. Set a new config, and assign all the drives correctly. Do the historical diagnostics you recovered show exactly the same complete list of drives?
  16. 1 point
    If you're looking for your cheapest option, the answer would be a GeForce GT 1030. If you're wanting to do a little more than just browse the internet, the next option I would have to recommend is a GeForce GTX 1660 Ti. It's a recent GPU that's more oriented for gaming, but it's my formal mid-range recommendation. If you're looking at Plex transcoding, I'd recommend the GeForce GT 1030. It's a low-profile card with low power consumption, a mere 30W, compared to say the GeForce GTX 1660 Ti at 80W and the GeForce RTX 2080 Ti at a whopping 250W. SpaceInvader One made a wonderful video earlier this year about how to use a GPU for Plex transcoding. I'd recommend you go check it out.
  17. 1 point
  18. 1 point
    Granted this has been covered in a few other posts but I just wanted to have it with a little bit of layout and structure. Special thanks to [mention=9167]Hoopster[/mention] whose post(s) I took this from.

What is Plex Hardware Acceleration?
When streaming media from Plex, a few things are happening. Plex will check, against the device trying to play the media, that:

Media is stored in a compatible file container
Media is encoded in a compatible bitrate
Media is encoded with compatible codecs
Media is a compatible resolution
Bandwidth is sufficient

If all of the above are met, Plex will Direct Play, or send the media directly to the client without being changed. This is great in most cases as there will be very little, if any, overhead on your CPU. This should be okay in most cases, but you may be accessing Plex remotely or on a device that is having difficulty with the source media. You could either manually convert each file or get Plex to transcode the file on the fly into another format to be played.

A simple example: your source file is stored in 1080p. You're away from home and you have a crappy internet connection. Playing the file in 1080p is taking up too much bandwidth, so to get a better experience you can watch your media in glorious 240p without stuttering / buffering on your little mobile device by getting Plex to transcode the file first. This is because a 240p file will require considerably less bandwidth compared to a 1080p file.

The issue is that depending on which format you're transcoding from and to, this can absolutely pin all your CPU cores at 100%, which means you're gonna have a bad time. Fortunately Intel CPUs have a little thing called Quick Sync, which is their native hardware encoding and decoding core. This can dramatically reduce the CPU overhead required for transcoding, and Plex can leverage this using their Hardware Acceleration feature.

How Do I Know If I'm Transcoding?
You're able to see how media is being served by first playing something on a device. Log into Plex and go to Settings > Status > Now Playing. As you can see, this file is being direct played, so there's no transcoding happening. If you see (throttled) it's a good sign. It just means that your Plex Media Server is able to perform the transcode faster than is necessary.

To initiate some transcoding, go to where your media is playing. Click on Settings > Quality > Show All > Choose a Quality that isn't the Default one. If you head back to the Now Playing section in Plex you will see that the stream is now being Transcoded. I have Quick Sync enabled, hence the "(hw)" which stands for, you guessed it, Hardware. "(hw)" will not be shown if Quick Sync isn't being used in transcoding.

PreRequisites
1. A Plex Pass - if you require Plex Hardware Acceleration, test to see if your system is capable before buying a Plex Pass.
2. Intel CPU that has Quick Sync capability - search for your CPU using Intel ARK.
3. Compatible motherboard

You will need to enable the iGPU in your motherboard BIOS. In some cases this may require you to have the HDMI output plugged in and connected to a monitor in order for it to be active. If you find that this is the case on your setup you can buy a dummy HDMI doo-dad that tricks your unRAID box into thinking that something is plugged in. Some machines like the HP MicroServer Gen8 have iLO / IPMI which allows the server to be monitored / managed remotely. Unfortunately this means that the server has 2 GPUs, and ALL GPU output from the server passes through the ancient Matrox GPU. So as far as any OS is concerned, even though the Intel CPU supports Quick Sync, the Matrox one doesn't. =/ You'd have better luck using the new unRAID Nvidia plugin.

Check Your Setup
If your config meets all of the above requirements, give these commands a shot; you should know straight away if you can use Hardware Acceleration.
Login to your unRAID box using the GUI and open a terminal window, or SSH into your box if that's your thing. Type:

ls /dev/dri

If you see an output like the one above, your unRAID box has Quick Sync enabled. The two items we're interested in specifically are card0 and renderD128. If you can't see them, not to worry; type this:

modprobe i915

There should be no return or errors in the output. Now again run:

ls /dev/dri

You should see the expected items, i.e. card0 and renderD128.

Give your Container Access
Lastly we need to give our container access to the Quick Sync device. I am going to passive-aggressively mention that they are indeed called containers and not dockers. Dockers is a maker of boots and pants and has nothing to do with virtualization or software development, yet. Okay, rant over. We need to do this because the Docker host and its underlying containers don't have access to anything on unRAID unless you give it to them. This is done via Paths, Ports, Variables, Labels or, in this case, Devices. We want to provide our Plex container with access to one of the devices on our unRAID box. We need to change the relevant permissions on our Quick Sync device, which we do by typing into the terminal window:

chmod -R 777 /dev/dri

Once that's done, head over to the Docker tab and click on your Plex container. Scroll to the bottom, click on Add another Path, Port, Variable. Select Device from the drop down and enter the following:

Name: /dev/dri
Value: /dev/dri

Click Save followed by Apply.

Log back into Plex and navigate to Settings > Transcoder. Click on the button to SHOW ADVANCED and enable "Use hardware acceleration where available". You can now do the same test we did above by playing a stream, changing its Quality to something that isn't its original format and checking the Now Playing section to see if Hardware Acceleration is enabled. If you see "(hw)", congrats!
You're using Quick Sync and Hardware Acceleration.

Persist your config
On reboot, unRAID will not run those commands again unless we put them in our go file. So when ready, type into terminal:

nano /boot/config/go

Add the following lines to the bottom of the go file:

modprobe i915
chmod -R 777 /dev/dri

Press Ctrl+X, followed by Y, to save your go file. And you should be golden!
  19. 1 point
    You're only supposed to rename the files, not move them to different folders. Documentation is pretty clear on that.
  20. 1 point
    Dynamix Cache Dirs. The CPU usage does drop on that after time, but to be honest, you're always going to be best off excluding the appdata share in particular and only having your media shares included in its list.
  21. 1 point
    Are you using a Shield to test playback by any chance? The best way to test I have found is using a wired PC with the PMP app https://www.plex.tv/media-server-downloads/#plex-app

@Kaizac Try emptying the trash on your drives if you are getting the limit exceeded error:

rclone delete remotename: --fast-list --drive-trashed-only --drive-use-trash=false -v --transfers 50

Add the "--dry-run" flag if you want to test it first.

It may also be useful to dedupe, particularly video files:

rclone dedupe skip remotename: -v --drive-use-trash=false --no-traverse --transfers=50

And remove empty directories:

rclone rmdirs remotename: -v --drive-use-trash=false --fast-list --transfers=50

Again, add --dry-run if you want to see what it will do first.
  22. 1 point
    Went to /boot/config/, deleted network.cfg and rebooted... did not appear to work. I then did the same thing to network-rules.cfg and rebooted, with no change again. I then hit the little down arrow next to bond0 and picked eth1... errors instantly disappeared. Went back to bond0 and it seems to be staying error-free. Thank you for your help, Benson.
  23. 1 point
    This. And where are you checking the cpu usage. Doing it via the GUI is naturally going to increase the CPU usage and is to be expected.
  24. 1 point
    How can I launch a Terraria server with mods?
  25. 1 point
    I guess this is the solution: Although I would say this is not exactly what I encountered, even without any container update or restart the default admin password still will not change using the webui. Anyway thanks for the heads up! The solution in the readme totally solved the problem it seems. Next time I'll rtfm.
  26. 1 point
    Thanks - those did not work because it was not even being recognized as LUKS. Thankfully I ended up fixing the issue. For whatever reason, when unRAID failed to mount the drive as cache, it overwrote the first 6 bytes of the LUKS header with zeroes. Thankfully, these bytes of the LUKS header are standard, so I used a hex editor and dd to correct them. Hopefully, these commands might help someone else encountering the same situation in the future:

dd if=/dev/nvme1n1p1 of=/mnt/user/share/broken_header.bin bs=512 count=1

root@Tower:~# dd if=/dev/nvme1n1p1 bs=48 count=1 | hexdump -C
1+0 records in
1+0 records out
48 bytes copied, 0.0134741 s, 38.0 kB/s
00000000  00 00 00 00 00 00 00 01  61 65 73 00 00 00 00 00  |........aes.....|
00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000020  00 00 00 00 00 00 00 00  78 74 73 2d 70 6c 61 69  |........xts-plai|

root@Tower:~# dd if=/dev/sdl1 bs=48 count=1 | hexdump -C
1+0 records in
1+0 records out
48 bytes copied, 0.00186538 s, 25.7 kB/s
00000000  4c 55 4b 53 ba be 00 01  61 65 73 00 00 00 00 00  |LUKS....aes.....|
00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000020  00 00 00 00 00 00 00 00  78 74 73 2d 70 6c 61 69  |........xts-plai|

After fixing with a hex editor, using the bytes from sdl1 above:

dd if=/mnt/user/share/fixed_header.bin of=/dev/nvme1n1p1 bs=512 count=1
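For anyone hitting the same thing: the hex-editor step can also be done with printf and dd. A sketch, demonstrated on a scratch file rather than a real device (the 6-byte LUKS1 magic 4c 55 4b 53 ba be is written with octal escapes for portability):

```shell
# Scratch file standing in for the damaged partition; first sector zeroed,
# just like the broken header in the post
img="/tmp/luks_header_demo.bin"
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null

# Restore the standard LUKS1 magic over the zeroed first 6 bytes:
# 4c 55 4b 53 ba be = "LUKS" followed by \272 \276 (octal for 0xba 0xbe)
printf 'LUKS\272\276' | dd of="$img" bs=1 count=6 conv=notrunc 2>/dev/null

# First 4 bytes should now read "LUKS"
dd if="$img" bs=1 count=4 2>/dev/null
```

On a real device you would of course take a full header backup first (as the post does with broken_header.bin) and point dd at the partition only once you are sure just the magic bytes are damaged.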
  27. 1 point
    It was OK to do this. A format is just a form of write operation, and the parity build process can handle writes occurring to the array while it is running. Such writes will slow down the parity build process (and the write operation), but in the case of a format this would only be by a matter of minutes; larger writes have more impact. There is also the fact that if a data drive fails while building the initial parity, the contents of the write could be lost.

No harm is done. The reason is all about failure of a data drive while attempting to build the new parity. If the following conditions are met:

The old parity drive is kept intact.
No data is written to the array while attempting to build parity on the new drive.

then there is a process (albeit a little tricky) where the old parity drive can be used in conjunction with the 'good' data drives to rebuild the data on the failed data drive. It is basically a risk/elapsed-time trade-off, and the recommended process minimizes risk at the cost of increasing elapsed time.

This is only done if you already have parity fully built. It is not required if you are in the process of building initial parity (or have no parity disk). This is because you were running the parity build process. In such a case whatever is already on the disk is automatically included in the parity calculations, so it is not a requirement that the disk contain only zeroes.
  28. 1 point
    Hey everyone, Happy Prime Day! Amazon has some good deals on cheap storage: WD 10TB Elements Drive for $159.49 And for those of you who don't want to shuck the drive and want the additional warranty, you can also get an 8TB Red for $156.99: WD Red 8TB Drive for $156.99 Note you may have to click "View Deals" when logged in to your Prime account. Enjoy the new storage!
  29. 1 point
    For an introduction to S.M.A.R.T., you can start here: https://en.wikipedia.org/wiki/S.M.A.R.T. As you read, you will find out that the system is not really that great a predictor. But it can indicate if a disk is starting to show some signs that it might be getting ready to fail. Usually when a disk has failed catastrophically, you can't get a SMART report from it at all.

Remember that SMART is controlled by the disk manufacturers. They don't want to provide information in such a fashion that would prompt a consumer to RMA a disk that might continue to function for months to years before it catastrophically fails. Years of Unraid user experience has shown that certain attributes are really useful for determining disk health, and those should be monitored. You can find those attributes by going to Settings >>> Disk Settings >>> Global SMART Settings.
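As a concrete example of watching those attributes, here is a sketch that flags non-zero raw values for a handful of commonly monitored attribute IDs (5, 187, 188, 197, 198 - an assumption based on typical advice, check your own Global SMART Settings for the actual list). A sample report stands in for real `smartctl -A /dev/sdX` output:

```shell
# Sample attribute lines standing in for `smartctl -A /dev/sdX` output
report="/tmp/smart.sample"
cat > "$report" <<'EOF'
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
EOF

# Flag any monitored attribute whose raw value (last column) is non-zero
awk '$1 ~ /^(5|187|188|197|198)$/ && $NF != 0 { print "warning:", $2, "raw =", $NF }' "$report"
```

A non-zero and, more importantly, a growing raw value on attributes like Current_Pending_Sector is the kind of trend worth acting on, even when the normalized values still look healthy.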
  30. 1 point
    I'm seeing the same problem. When I reload the page the scan starts again, does that mean no problems were found? I've included log and diagnostics files just in case. athena-diagnostics-20190718-1054.zip athena-syslog-20190718-1056.zip
  31. 1 point
    Indeed, I have been using deluge v2 for just over a week, and it is running just fine. The thin client is working well - but I do run Linux for my desktop. I like all the new features - a welcome upgrade! I also use the CouchPotato docker and have it set to use Black hole to upload torrent/magnet files to deluge - it does everything I need.
  32. 1 point
    Application Name: UNMANIC - Library Optimiser
Application Site: https://github.com/Josh5/unmanic/
Docker Hub: https://hub.docker.com/r/josh5/unmanic/
Github: https://github.com/Josh5/unmanic/

Unmanic is a simple tool for optimising your video library to a single format. Unmanic is developed in such a way that it takes the complexity out of converting your media library. The idea is simply to point Unmanic at your library and let it manage it. Unmanic provides you with 3 services:

First, Unmanic has a scheduler built in to scan your whole library for files that do not conform to your video presets. Videos found with incorrect formats are then queued for conversion.

Second, Unmanic provides a folder watchdog. When a video file is modified or a new file is added in your library, Unmanic is able to check that video against your configured video presets. Like the first service, if this video is not formatted correctly it is added to a queue for conversion.

Finally, Unmanic provides you with a Web UI to easily configure and monitor the progress of your library conversion.

NOTE: Unmanic is currently in beta. There is still a fair bit of development to go before I would consider it an end-user-ready product. As such, please feel free to provide me with feedback on what features you would like to see added, keeping in mind the ultimate goal of Unmanic is to be a simple solution for average people to convert their video library. The Docker container is currently based on an Ubuntu image, so it will be quite bloated for what it is. I will migrate this over to a more streamlined Alpine-based container before the application comes out of beta testing.

Setup Guide:
Setup according to the following image:

For those wanting to access multiple libraries:
^ This will be replaced eventually with the idea of having multiple paths configured from the WebUI, but that is a while off yet.
  33. 1 point
    Brother you need to get a donate button going Sent from my SM-N960U using Tapatalk
  34. 1 point
    The rebuild of Disk 8 was successful while connected to the motherboard SATA port. I've now stopped the array, added the new 10TB Ironwolf as the 2nd parity drive, and restarted the array. The parity sync/build for Parity 2 is underway... 20 hrs to go for dual failure protection to be active.
    LSI got back to me about the pictures: although the card looks good physically and the labelling is fine, the serial number is not in their manufacturing database and is in the wrong format for this series of card. They've given me some instructions and tools that should work from a terminal session under unRAID to do some diagnostics. I will wait for the 2nd parity disk to finish syncing before doing anything else.
    I also got shipping confirmation on the replacement miniSAS cables and they could be here by the end of this week. Once they arrive I'll shut down, swap out the cables, disconnect all unRAID disks, and do some testing with the controller and the tools that LSI supplied. I have two spare drives that I can use for testing purposes, one 10TB and one 8TB, which I can move between bays of the enclosure to run the tests on each of the 4 SAS/SATA connections per backplane. The only ones I'll not bother testing are the bays connected to motherboard SATA ports, as none of them have ever reported errors. I may use those drives initially, since they have been reliable: if previously error-free drives started failing while attached to the LSI controller instead of the MB SATA, that would almost certainly point to the LSI controller as the faulty unit.
    When I get my next disability check, I'm going to go ahead and order a new LSI card from the eBay vendor that guarantees a new-in-box unit from LSI. It won't hurt to have a spare even if the existing controller proves to be OK. More to come when I get some of the testing done.
  35. 1 point
    If you install the plugin it is simple enough:
    1. Go to Settings -> User Scripts.
    2. Use the option to add a new script and give it a name.
    3. Hover the mouse over the list of scripts and select the option to edit the script.
    4. Enter the commands you want. In this case I think it is simply the 'reboot' command.
    5. Save the script.
    6. Set the schedule for when it is to run.
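    A minimal sketch of what the script body might look like. The log line and the REBOOT_CMD override are my additions for easier dry-running, not something the plugin requires; a bare `reboot` on its own is all that is actually needed:

    ```shell
    #!/bin/bash
    # Sketch of a scheduled-reboot body for the User Scripts plugin.
    # REBOOT_CMD is overridable so the helper can be tested without
    # actually rebooting; the plugin itself just runs the script as-is.
    scheduled_reboot() {
        echo "reboot triggered at $(date '+%F %T')"
        "${REBOOT_CMD:-reboot}"
    }
    # On the server, the script body would simply call:
    # scheduled_reboot
    ```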
  36. 1 point
    Awesome feature I think
  37. 1 point
    If they are too old and pre-700 series, then at this time there is nothing you can do, to my knowledge.
  38. 1 point
    1) Yes, the 760p suffers from the bug. 2) I have installed Windows 10 directly on the NVMe and it works perfectly. Unfortunately, the only ways to make it work with Windows are to patch the kernel (since the XML solution does not really work) or to buy another drive. This workaround works fine with Linux guests, but I was not able to make it work with Windows.
  39. 1 point
    Thanks! I changed my values to manual in my BIOS. I had the same voltage that Jay was having at "stock" settings. On another topic, I contacted the person doing the asus-wmi-sensors project on GitHub. This is a project so Linux can read the sensors. I wonder if that could be integrated into the CA System Temp plugin once I have it working!
  40. 1 point
    Currently unRAID uses basic auth to enter credentials for the web GUI, but many password managers don't support this. It would be great if we could get a proper login page. Examples: this kind of login page always works with password managers; this one does not.
  41. 1 point
    For those still having issues, download the latest version of Pulseway for Slackware. As of Pulseway 6.1 they added support for newer libssl, which seems to have fixed the issues. You may have to update your symlinks as well.
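    The symlink step can be sketched like this. The library directory and version numbers are assumptions on my part, so check what actually exists under /usr/lib64 on your box before running anything:

    ```shell
    #!/bin/bash
    # Hypothetical symlink fix: point the names the Pulseway binary links
    # against at the libssl/libcrypto builds the system actually ships.
    # The directory argument makes the helper easy to dry-run elsewhere.
    relink_ssl() {
        local libdir="${1:-/usr/lib64}"
        ln -sf "$libdir/libssl.so.1.1"    "$libdir/libssl.so"
        ln -sf "$libdir/libcrypto.so.1.1" "$libdir/libcrypto.so"
    }
    # On the server: relink_ssl /usr/lib64
    ```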
  42. 1 point
    I’m having these exact same issues. It happened specifically when I upgraded to 6.7. Was getting ~800 Mb/s before the upgrade and am now getting 20-35 Mb/s. Both before and after the upgrade I’ve been using the Speedtest app from the community App Store. Have swapped out cables multiple times, and am seeing similar speeds from qbittorrent and sabnzbd containers.
  43. 1 point
    Application Name: Jackett Application Site: https://github.com/Jackett/Jackett Docker Hub: https://hub.docker.com/r/linuxserver/jackett/ Github: https://github.com/linuxserver/docker-jackett Please post any questions/issues relating to this docker you have in this thread. If you are not using Unraid (and you should be!) then please do not post here, instead head to linuxserver.io to see how to get support.
  44. 1 point
    Post the entire diagnostics zip file Sent via telekinesis
  45. 1 point
    A simple file rename won't do the job. You need to convert the Proxmox image to something usable by Unraid. Check https://docs.openstack.org/image-guide/convert-images.html. I suggest using qcow2. After the conversion, create a new VM and make sure the disk format you use is qcow2 (or raw if you went that way). You don't have to start the VM, just create it. Once you've done that, replace the vmdisk in the VM folder with the one you just converted. You might have to run fixmbr/fixboot on the first startup, so make sure you have a Windows ISO mounted.
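    The conversion itself is a single qemu-img call. A sketch (the paths in the example are placeholders, and the QEMU_IMG override exists only so the helper can be exercised without QEMU installed):

    ```shell
    #!/bin/bash
    # Sketch of the Proxmox-to-Unraid disk conversion step.
    # -p prints progress, -O selects the output format the new VM expects.
    convert_vdisk() {
        local src="$1" dst="$2"
        "${QEMU_IMG:-qemu-img}" convert -p -O qcow2 "$src" "$dst"
    }
    # Example (placeholder paths -- adjust to your own image and VM folder):
    # convert_vdisk /mnt/user/isos/proxmox-disk.vmdk /mnt/user/domains/Win10/vdisk1.qcow2
    ```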
  46. 1 point
    When your support post has been solved, please edit your title post with (SOLVED) to let others know your issue has been resolved. For example: Drive unresponsive ---> (SOLVED) Drive unresponsive. As well, if somebody happens to have the same problem and does a search or sees it, they might be inclined to see what was done to troubleshoot the problem so they can solve it the same way. Obviously, if the problem is still in progress and you're still troubleshooting, do not edit your topic.
  47. 1 point
    Yes, works fine. I have an unRAID box using the 2-port version of the same card, no problems. The 2-port is identical to the 4-port, apart from having half the socket parts missing.
  48. 1 point
    Yes, it is that simple. Personally, I'm a fan of setting static IPs within the router and keeping all the computers set to DHCP, but the end result is the same.
  49. 1 point
    My Win10 laptop started doing this a lot more recently. I am assuming some Windows update has prompted the change. I can resolve it by telling windows to log into the share with username of "\" and leave the password box blank. I hope that works for you.
  50. 1 point
    For completeness, then, after adding the three users and creating the "documents" share, the command line work goes something like this:
    mkdir /mnt/user/documents/brother
    chown brother:users /mnt/user/documents/brother
    chmod 700 /mnt/user/documents/brother
    mkdir /mnt/user/documents/mother
    chown mother:users /mnt/user/documents/mother
    chmod 700 /mnt/user/documents/mother
    mkdir /mnt/user/documents/me
    chown me:users /mnt/user/documents/me
    chmod 700 /mnt/user/documents/me
    Caveat: I haven't really tested this with SMB.
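    The same three-command pattern repeated per user can be collapsed into a loop. A sketch (the `id` guard is my addition, so the chown is simply skipped for accounts that don't exist on the box):

    ```shell
    #!/bin/bash
    # Sketch: create one private folder per user under the "documents" share.
    # Takes the base path first so it can be pointed at a scratch directory.
    make_user_dirs() {
        local base="$1"; shift
        local u
        for u in "$@"; do
            mkdir -p "$base/$u"
            # chown only makes sense if the user account exists here
            if id "$u" >/dev/null 2>&1; then
                chown "$u:users" "$base/$u"
            fi
            chmod 700 "$base/$u"
        done
    }
    # On the server: make_user_dirs /mnt/user/documents brother mother me
    ```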