Herdo

Everything posted by Herdo

  1. Nevermind. That was easier than I thought it would be. I found another post which said I just need to identify the parity drive and then re-add them all. I've got them all back online now with sdc as the parity, and sdd and sde as the data drives. Doing a parity check now. For the future, is there a better way to shut down the server when it times out like that? This is at least the third time this has happened to me. Luckily, the first two times were before I had any data on the array, when I was initially setting it up.
  2. I'm installing a new UPS, and when I rebooted my unRAID server I couldn't access the web GUI (the webpage was unresponsive). After about 20 minutes I hard shut down the server. Upon rebooting the server, I've found that none of the disks are assigned! The only thing left is the cache drive. Not sure what the safest way to proceed is. I know all the data is there, but I'm not sure if fixing this is possible. Any ideas? unraid-diagnostics-20160829-1508.zip
  3. I have noticed the problem with my Samsung smart TV (UNJS9000) which has a wired connection. I haven't noticed it yet on my Amazon Fire TV upstairs which is wireless. I'm going to start watching stuff through the Plex desktop client to see if it has the same problem.
  4. Yep, it doesn't seem to happen with your typical 23 minute or 43 minute episodes but I have a few shows that run about 50 minutes and it happens right at about the 45 minute mark. I'm using 6.1.9 by the way.
  5. Hey, thanks for the advice. All of my disks are already set to "Never" for the spin down delay. Should I be doing something different? EDIT: Whoops! Just checked, and apparently I only set the parity drive to "Never"; the others were at "Default". I could have sworn I checked that myself, but I think I just checked "Disk Settings", not the individual disks themselves. Although, if I have "Default Spin Down Delay: Never", shouldn't the individual disks set to "Default" be using "Never"?
  6. I've got PMS running on a separate machine and I've got all the media stored on my unRAID server. I plan on eventually moving PMS to my unRAID server when I upgrade the CPU. Everything has worked fine so far, except this one problem. Every so often, Plex will randomly "pause" the media. Nothing actually freezes or crashes, it just pauses. Clicking the "jump back 15 seconds" button immediately starts the media playing again, and it then plays right on through where it stopped without any problem. It seems to happen more often towards the end of longer episodes, like around the 48 minute mark, usually a minute or two before the episode is over. It just happened again and I checked the logs to see if maybe something was interrupting the playback, but there hasn't been anything logged for a few hours. The reason I'm asking this here is because I've changed nothing with my PMS install, other than changing the libraries from local directories to the unRAID user shares. Maybe it's some sort of network issue? Everything is connected via a wired 1Gbps network. EDIT: Oh, and media bitrate/size doesn't seem to have any effect. This happened once while playing a 20Mbps movie and has happened several times playing 3Mbps SD television shows, all of which are directly streaming (no transcoding). And to be clear, this isn't stuttering or buffering; it just pauses, as if I pressed the pause button.
  7. Docker containers have almost no overhead, unlike full virtualization, since Docker containers aren't virtualizing an entire OS. I haven't used Plex specifically, but it should run just like having it natively installed. It's funny, I just built my system less than 2 weeks ago and I was hesitant to buy a decent CPU for running Plex. I already have an HTPC running Plex media server and I wanted to keep them separate, so I just bought an Intel G4400 for my unRAID server. Several people told me I'd eventually want to install and use Plex through unRAID, and here we are, less than 2 weeks later, and I'm probably going to buy the Xeon E3-1240 v5 to move my Plex server over to unRAID haha. I love Docker! Also, I built an HTPC using the Node 304 and yes, you can mount an SSD on the outside of the brackets. Because SSDs have no moving parts, you could literally just tape them inside the case if you had to. I'm actually using 3M velcro to mount my SSD cache inside my unRAID server. There's no harm in doing this.
  8. I'm not too sure about having two separate cache drives. I know you can have multiple cache drives mirrored for data integrity, but that is different than what you are asking about. You are probably better off spending the $50 - $80 on a good 120GB - 250GB SSD. Sabnzbd (or any other program, for that matter) is going to be run through a Docker container. You will point it to a User Share (your "Movies", for example), and in that user share's settings you will specify that you want it to use your cache drive. The file will be downloaded to the cache drive and then moved by a mover script at a predetermined time set by you. For example, you could tell it to execute the mover script nightly at 3:00 A.M. At 3:00 A.M. every night, it will move the file from the cache drive and place it onto the actual array. The file will still be usable and appear as part of the User Share called "Movies" whether it is on the cache drive or on the array. As far as you are concerned it won't look or behave any different. This is because you won't be accessing the actual disks themselves, but the "User Share". If you were to look at the actual disks themselves, then yes, the file would appear on the cache drive and not the array drive, but it's recommended to use User Shares for this reason; it makes things easier. Watch these videos: https://lime-technology.com/getting-started/ Particularly the Disk and User Shares one. Look here: http://lime-technology.com/wiki/index.php/Hardware_Compatibility#PCI_SATA_Controllers Wait for someone else to come along and help with this, I am pretty new to this myself and am not sure which to recommend in your particular case.
  9. Hey again, Squid. I actually have a related question. I know mapping appdata to cache directly is a good idea, but what about my "cache only" user shares? I've got them set to cache only, so I doubt it matters, but I've currently got all the docker volumes mapped directly to the cache rather than "user". Please see the image below. Is this OK? The stuff that is cache only (/downloads <-> /mnt/cache/MovieDownloads, for instance) is mapped directly to the cache, however the stuff I want on the array is mapped to /user/shareName (/movies <-> /mnt/user/Movies, for instance). Based on my understanding of user shares, I doubt this matters because the shares on the cache are cache-only user shares anyways. Also, I've noticed there is a /mnt/user/ path as well as a /mnt/user0. I have everything mapped to "user", but what is the difference? I've noticed that "user0" only contains my "Movies" and "TV" user shares, not "appdata", "TVDownloads" or "MovieDownloads". Everything has been working just fine with these settings, I'm just curious to know if I have things set up correctly. Thanks again Squid, you've been a huge help!
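A toy model of the union behavior being asked about (assuming, as the shares listed above suggest, that /mnt/user is the merged view of the cache plus all array disks, while /mnt/user0 is the same view with the cache excluded; the disk and share names here are illustrative):

```python
def union_view(disks: dict[str, set[str]], include_cache: bool) -> set[str]:
    """Merge per-disk share listings into one user-share view.

    include_cache=True  models /mnt/user  (cache + array disks)
    include_cache=False models /mnt/user0 (array disks only), which is
    why cache-only shares never appear under user0."""
    shares = set()
    for disk, names in disks.items():
        if disk == "cache" and not include_cache:
            continue  # user0 skips the cache drive entirely
        shares |= names
    return shares

# Hypothetical layout matching the post: downloads/appdata live on cache only.
disks = {
    "cache": {"appdata", "MovieDownloads", "TVDownloads"},
    "disk1": {"Movies"},
    "disk2": {"TV"},
}
print(sorted(union_view(disks, include_cache=True)))   # the /mnt/user view
print(sorted(union_view(disks, include_cache=False)))  # the /mnt/user0 view
```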
  10. Oh I know they will have different battery capacities, I just meant they should run the same drivers/firmware/etc.
  11. Thank you Frank1940. I was having the same problem with the APC models. It's hard to find any real differences.
  12. Ah OK, so everything works using the native UPS plugin, nice. I've done some reading and it seems like the entire PFCLCD line should all be the same, the only difference being their runtime. I'm thinking about the 1000VA version because I really don't need that much backup power. I literally just need it to shut off right away in the event of a power failure. The 1500VA version is practically triple the max wattage I would ever put on the system. Do you (or anyone else) know of any other differences between the two? Thanks again for the help!
  13. Thanks switchman. Is there any reason to not use NUT right now? It sounds like this plugin is already available, and from reading the comments it seems to work pretty flawlessly. http://lime-technology.com/forum/index.php?topic=42375.0 I was sold on the APC because of apcupsd, but with this plugin I don't think it even matters.
  14. Good thinking, thank you. It may be outdated, but at least at one point the apcupsd team recommended NOT getting the SMT750. I think that settles that.
  15. I'm looking to add a UPS to my new build. The APC UPSes seem to be recommended because unRAID can integrate with them easily. I know I want pure sine wave and AVR. Other than that though, I'm not sure what features I need or should be looking for exactly. I've used a PSU calculator to determine full load. In the calculation I've included what my unRAID server will eventually have: 12 HDDs, 2 SSDs, an E3-1240 v5, and I got 339W. I even rounded it up to 400W to be safe. From there I plugged the load wattage into APC's UPS selector and I even added "20% Extra Power for future expansion", because why not, better safe than sorry. Two UPSes stood out to me. The first one is the SMC1000: 1000VA and 600W. It says "11 minutes of runtime". This appears to be APC's cheapest pure sine wave model. http://www.apc.com/shop/us/en/products/APC-Smart-UPS-C-1000VA-LCD-120V/P-SMC1000 And this one is the SMT750: 750VA and 500W. It claims 7 minutes of runtime. http://www.apc.com/shop/us/en/products/APC-Smart-UPS-750VA-LCD-120V/P-SMT750 I know the SMC models are considered inferior to the SMT models, but I'm not sure if the SMC is lacking features I may want. The SMC offers a longer runtime, but I'm not too concerned with that because all I need this to do is safely shut down the PC in the event of a power failure. The prices are negligible; the SMC is $253 on Amazon, and the SMT is $263 on Amazon. So is there any reason I would choose one over the other? Both seem to supply more than enough power for a clean shutdown. Thanks.
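The sizing arithmetic above works out as follows (a quick check using the figures from the post; the 20% headroom multiplier is the "future expansion" option mentioned):

```python
# Load figures from the post (339 W came from a PSU calculator).
estimated_load_w = 339
rounded_load_w = 400                     # rounded up "to be safe"
with_headroom_w = rounded_load_w * 1.20  # +20% for future expansion

# Rated continuous output of the two candidate units.
smc1000_w = 600  # APC SMC1000: 1000 VA / 600 W
smt750_w = 500   # APC SMT750:   750 VA / 500 W

# Both units exceed the 480 W requirement, so either covers the load;
# the remaining differences are runtime and feature set.
print(with_headroom_w)                    # 480.0
print(smc1000_w >= with_headroom_w)       # True
print(smt750_w >= with_headroom_w)        # True
```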
  16. Thanks Squid, you caught my reply before I edited it, haha. I was confused because I hadn't seen the "Use cache disk:" setting when you create a new Share. I see now that you can select "Use cache disk: Only" when creating the Share to, well, only use the cache disk. You answered all my questions and even preemptively answered my edit, haha. That's good to know, thank you again Squid!
  17. Wow, thanks Squid! That was a ton of helpful info. I actually think I figured it out. I think part of the problem was that I had my Settings > Global Share Settings > Cache Settings > "Use cache disk:" setting set to "off". I planned on just using it as a disk for Docker and downloading the initial files before Couch Potato and Sonarr moved them. I turned that back on and stuff started to make more sense. This is amazing! Thank you; you've done a wonderful job with this. I was actually going to delete the Couch Potato plugin, but I started up the Apps and it recognized it as installed already. Is there any harm in leaving Couch Potato installed or should I start over? I'll definitely be using this to install plugins from now on. I just mapped the /config container volume to /mnt/cache/appdata/CouchPotato and it created the directories, and now I can see a new share called "appdata". I then set the "appdata" share to "Use cache disk: Only" in the settings. I did the same for the /downloads volume and now I have a second share called "downloads" (also set to cache only). And then mapped /movies to my /mnt/user/Movies share. Thanks again for the help Squid! EDIT: Ohhhhh... I think I get it. I CAN create a new User Share called "appdata" on the cache drive if I click "Use cache disk: Only" when creating it. OK, that makes sense. Well, is the way I did it improper? Should I delete my "appdata" and "downloads" shares and start over?
  18. I'm really confused on how to set up Docker. I've been following the LimeTech Official Docker Guide, and a lot of it doesn't seem to make sense. First it tells me I need "A share created called “appdata” that will be used to store application metadata." Is this a User Share? What if I want to store it on my cache disk? Then further down it tells me to place the Docker image at /mnt/cache/docker.img. OK, done. Now I'm trying to add Couch Potato. I've added the linuxserver.io repository and I'm trying to map the volumes. The problem is, the "config" directory in the screenshot doesn't exist on my server: /opt/appdata Am I supposed to just create this directory? I swear that I read something to the effect of "an appdata folder will be created by the system", but I can't find anything like this anywhere when SSHed into the server. I have checked every directory using ls -al. Also, the next volume mapping says "/mnt/user/bysync". Isn't /mnt/user where User Shares are located? Am I supposed to create a User Share for each Docker app? I found this post which seems to make a bit more sense, but I'm not sure what to do exactly: http://lime-technology.com/forum/index.php?topic=40937.msg387520#msg387520 I'm just really confused by all of this.
  19. I've gotten this error twice now during parity sync: array health report [FAIL] It doesn't seem to affect anything, and I've checked the SMART attributes on both drives and they both look fine. Is it just saying this because there is no parity yet?
  20. Thanks John, rebooting seems to have solved the issue. I created a diagnostics file anyways and I will upload it when the parity sync is finished.
  21. Thanks John_M. I started the array, but now the webui seems to have timed out or something. I just get the "spinning circle" loading indicator on the tab and at the bottom left of the webui it says "Spinning up all drives..." Is this normal? Refreshing the page and trying to open up the webui in a new tab or browser isn't working either. EDIT: It became responsive again, but the array was still stopped. Nothing seems to have happened. Trying it again now.
  22. Preclear just finished on both of my disks. 3 passes total. Just want to verify that there were no problems before I add them to an array.

      sdb
      ============================================================================
      ** Changed attributes in files: /tmp/smart_start_sdb /tmp/smart_finish_sdb
      ATTRIBUTE             NEW_VAL  OLD_VAL  FAILURE_THRESHOLD  STATUS  RAW_VALUE
      Temperature_Celsius = 162      206      0                  ok      37
      No SMART attributes are FAILING_NOW
      0 sectors were pending re-allocation before the start of the preclear.
      0 sectors were pending re-allocation after pre-read in cycle 1 of 3.
      0 sectors were pending re-allocation after zero of disk in cycle 1 of 3.
      0 sectors were pending re-allocation after post-read in cycle 1 of 3.
      0 sectors were pending re-allocation after zero of disk in cycle 2 of 3.
      0 sectors were pending re-allocation after post-read in cycle 2 of 3.
      0 sectors were pending re-allocation after zero of disk in cycle 3 of 3.
      0 sectors are pending re-allocation at the end of the preclear, the number of sectors pending re-allocation did not change.
      0 sectors had been re-allocated before the start of the preclear.
      0 sectors are re-allocated at the end of the preclear, the number of sectors re-allocated did not change.
      root@unRAID:/usr/local/emhttp#

      sdc
      ========================================================================1.15b
      == HGSTHDN724040ALE640  PK1334PEJNRD1S
      == Disk /dev/sdc has been successfully precleared
      == with a starting sector of 1
      ============================================================================
      ** Changed attributes in files: /tmp/smart_start_sdc /tmp/smart_finish_sdc
      ATTRIBUTE             NEW_VAL  OLD_VAL  FAILURE_THRESHOLD  STATUS  RAW_VALUE
      Temperature_Celsius = 162      206      0                  ok      37
      No SMART attributes are FAILING_NOW
      0 sectors were pending re-allocation before the start of the preclear.
      0 sectors were pending re-allocation after pre-read in cycle 1 of 3.
      0 sectors were pending re-allocation after zero of disk in cycle 1 of 3.
      0 sectors were pending re-allocation after post-read in cycle 1 of 3.
      0 sectors were pending re-allocation after zero of disk in cycle 2 of 3.
      0 sectors were pending re-allocation after post-read in cycle 2 of 3.
      0 sectors were pending re-allocation after zero of disk in cycle 3 of 3.
      0 sectors are pending re-allocation at the end of the preclear, the number of sectors pending re-allocation did not change.
      0 sectors had been re-allocated before the start of the preclear.
      0 sectors are re-allocated at the end of the preclear, the number of sectors re-allocated did not change.
      root@unRAID:/usr/local/emhttp#

      Thanks again everyone for the help, this forum is great.
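One way to sanity-check a preclear report like the two above is to confirm that every pending/re-allocated sector count is zero. A small illustrative parser (this is not part of the preclear script; the regex and sample text are just for demonstration):

```python
import re

def sector_counts(report: str) -> list[int]:
    """Extract the leading sector counts from preclear report lines.

    A healthy preclear shows 0 for every pending and re-allocated
    figure, as in the reports above."""
    pattern = re.compile(r"^(\d+) sectors", re.MULTILINE)
    return [int(m.group(1)) for m in pattern.finditer(report)]

# Abbreviated sample taken from the report format above.
sample = (
    "0 sectors were pending re-allocation before the start of the preclear.\n"
    "0 sectors were pending re-allocation after pre-read in cycle 1 of 3.\n"
    "0 sectors had been re-allocated before the start of the preclear.\n"
)
print(all(n == 0 for n in sector_counts(sample)))  # True for a clean report
```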
  23. Ah OK, that makes sense. Thank you John_M.
  24. I'm doing 3 passes of preclear on my first two disks and it just started the post-read phase on the third pass. I just noticed on the Dashboard that my CPU usage bounces around between 80% - 100%. Is this normal? I have an Intel G4400. Also, is there some kind of report generated when preclear finishes, or am I to assume the disks "passed" if the actual preclear didn't fail at some point? EDIT: Just SSHed in and ran top. User and System usage are both low, but idle is effectively at 0% (usually between 0% - 5%). I noticed the IO-wait is very high; is that possibly why unRAID thinks my CPU is being utilized so heavily? Is it counting IO-wait as usage? The preclear_disk.sh process itself is showing 0% CPU and 0.1% Memory.
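The IO-wait guess above is easy to model: /proc/stat exposes per-state CPU time, and a dashboard that counts iowait as "busy" will report near-100% usage during a disk-bound preclear even though user and system time stay low. A sketch (the sample counter values are made up for illustration):

```python
def busy_percent(stat_line: str, count_iowait: bool) -> float:
    """Compute CPU busy % from a /proc/stat 'cpu' line.

    Fields after the label: user nice system idle iowait irq softirq steal.
    Conventionally only idle (and iowait) time counts as not-busy; a tool
    that treats iowait as busy inflates usage during heavy disk I/O."""
    fields = [int(x) for x in stat_line.split()[1:9]]
    user, nice, system, idle, iowait, irq, softirq, steal = fields
    total = sum(fields)
    not_busy = idle if count_iowait else idle + iowait
    return 100.0 * (total - not_busy) / total

# Hypothetical sample: almost all time spent in iowait, as during a preclear.
sample = "cpu 50 0 50 100 800 0 0 0"
print(busy_percent(sample, count_iowait=True))   # 90.0 (iowait counted as busy)
print(busy_percent(sample, count_iowait=False))  # 10.0 (iowait counted as idle)
```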
  25. Xeon 100% I literally just built a nearly identical machine (same motherboard even; X11SSM-F) and was looking at the i7 vs the Xeon. The 1245 v5 is unnecessary because the only difference between it and the 1240 v5 is the built-in CPU graphics, which you won't be using for unRAID. The 1240 v5 is actually cheaper than the 6700, and has a better passmark score. http://pcpartpicker.com/product/zDcMnQ/intel-cpu-bx80662e31240v5 Also, I would not buy Kingston "Value RAM". This Crucial RAM is tested and compatible with the X11SSM-F and of much higher quality (I have it in my system right now). It's only about $9 more per stick. Considering the Xeon is actually $20 cheaper, even with the Crucial RAM the Xeon build is actually cheaper than the i7 build. http://www.newegg.com/Product/Product.aspx?Item=1A0-00CZ-000C5