jebusfreek666
Everything posted by jebusfreek666
-
I think I remember seeing somewhere that the TRIM plugin and scheduled TRIM are no longer necessary or recommended after upgrading to 6.9. I am not sure if I actually read this or not. Hoping someone could verify this info for me.
-
I figured it out just now. After the update to Unraid fried my container for some unknown reason, I lost the extra parameter --runtime=nvidia. In fairness though, I followed the steps in Q3 of the FAQ you have posted, and this was not mentioned. You might want to add it to avoid more of the same questions popping up. Thanks a lot for the response and offer to help, though. As always, you are very much appreciated!
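(For anyone landing here later, a rough sketch of what the working container invocation looks like with that parameter restored; the image name and GPU UUID are placeholders for whatever your own template uses:)

# Placeholders: image name and GPU UUID (get yours from `nvidia-smi -L`).
# In the Unraid template, --runtime=nvidia goes in the Extra Parameters field
# (Advanced View), and the two NVIDIA_* variables are container variables.
docker run -d --name=binhex-plex \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  binhex/arch-plex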
-
I also switched from 6.8.x to 6.9.2. When I did, it had the binhex-plex docker as an orphaned image. I redownloaded the docker and had to input my GPU info again, along with changing where my media was in the template. I do have the new Nvidia driver plugin and deleted the old one. I have restarted the docker, the docker engine, and the server multiple times now, but still can't get hardware acceleration to work. I am not sure where to go from here.
-
I am now starting to change over to encrypted XFS for my filesystem. I have seen spaceinvader one's video on this and think most of it will be easy. However, I also needed to add 2 drives to my array. So I set the default file system to encrypted XFS, stopped the array, and clicked to format the drives. That all went fine, but when it came back up it stated that the drives were unmountable because they were missing the encryption key. I had assumed it would ask me to input a passphrase during the initial format, but it did not. Now I have pored through the settings and can't seem to find where to set the passphrase. Please help if you know where to do this. I don't want to restart until I have this sorted.

EDIT: Never mind. I stopped the array and saw it at the bottom of the Main page. All squared away now.
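(If anyone wants to sanity-check from the command line that a drive really did get encrypted, something like this should work; the device name is a placeholder for your actual array disk partition:)

# Prints the LUKS header (version, cipher, key slots) if the partition is
# encrypted; errors out with "not a valid LUKS device" otherwise.
cryptsetup luksDump /dev/sdX1 | head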
-
Supermicro MOBO, fan speed profile
jebusfreek666 replied to jebusfreek666's topic in Motherboards and CPUs
Supermicro X11SSM-F. I have had it for about 3 years or so, I think.
-
I have to set my fan speed profile to Optimal after each start-up/reboot. Each time, it changes back to full speed. Does anyone know how to make the change persistent? I'm already tired of going into IPMI every time I turn on the server.
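(One workaround, assuming you have ipmitool available, for example via the IPMI plugin or NerdTools: run a small script at array startup with the User Scripts plugin. The raw command below is widely reported for Supermicro X10/X11 boards, but verify it against your own board before relying on it:)

#!/bin/bash
# Re-apply the Supermicro fan profile on every boot so the BMC default doesn't win.
# Reported fan modes for X10/X11: 0x00 Standard, 0x01 Full, 0x02 Optimal, 0x04 Heavy IO
ipmitool raw 0x30 0x45 0x01 0x02   # 0x01 = set mode, 0x02 = Optimal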
-
All drives with errors during parity check
jebusfreek666 replied to jebusfreek666's topic in General Support
Spoke too soon, I guess. I got 1 more CRC error at the 1 hr 5 min mark. I did run the parity check for 2 hours and didn't get any more errors after that. So I am sure that it is something to do with the cables. I am leaning towards it being the crosstalk. The cables I am using are longer than I need; they are the 1m variety. So even after freeing them from zip ties and rerouting them, there are still places where they overlap, like where I have them doubled over on themselves due to the excess length. I have already ordered 0.5m cables, so hopefully that will resolve this issue entirely.
-
All drives with errors during parity check
jebusfreek666 replied to jebusfreek666's topic in General Support
Well, the parity check completed. I had 0 sync errors and parity is valid. Each one of my drives got the CRC errors, but only between 3-12 on each. I shut down the server, removed the cables entirely, cut the zip ties holding them together, and reinstalled them, making sure they were seated correctly and there was as little overlap as possible to avoid crosstalk. I then restarted the server, acknowledged the errors on each of the drives, and began another non-correcting parity check to try and force more of the CRC errors. It has been running for an hour now, and I haven't gotten any errors. Last time, the errors started kicking up in the first 15 minutes. I think I will let it run for another hour just to be sure, but it appears that either poor seating or crosstalk was my issue, and it has now been resolved. Thanks guys for all your help!
-
All drives with errors during parity check
jebusfreek666 replied to jebusfreek666's topic in General Support
I just had a thought and wanted to check if this might be causing the issue. Previously, these drives were in my tower server with SAS to 4x SATA breakout cables. These are shucked drives, and I could not get them to work without taping the first few pins (the 3.3V issue). When I swapped the drives over to my new 846 case, I did not remove the tape. I am pretty sure that the backplane does not have this issue, so the tape is not needed. Is there any possibility that the tape is causing these errors? I assume not, as that is not on the data transfer side, but I just want to make sure it is not something stupid that I might overlook before I start ripping out hardware.
-
Is there any way to set up an automated process to delete files older than a set number of days from one specific share? My use case would be the following:
- Have 24/7 CCTV footage written to a separate cache pool
- Have mover transfer files to the array once a day to save on continuous writing
- Keep footage archived for 30 days, then deleted automatically
This way, I would always have a month's worth of footage I could go back to at any time. Is this possible? I feel like this is probably a user script kind of thing, but I have little to no knowledge of how to use scripts, let alone write them. Thanks
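(For illustration, a minimal sketch of the kind of user script that could do this, run on a daily schedule via the User Scripts plugin; the share path is an assumption, so point it at the actual CCTV share:)

#!/bin/bash
# Delete CCTV recordings older than 30 days, then remove any now-empty folders.
CCTV_SHARE="/mnt/user/cctv"   # assumed share path, adjust to your setup
find "$CCTV_SHARE" -type f -mtime +30 -delete
# -mindepth 1 keeps the share root itself from being removed
find "$CCTV_SHARE" -mindepth 1 -type d -empty -delete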
-
All drives with errors during parity check
jebusfreek666 replied to jebusfreek666's topic in General Support
Wish I would have known that before I left for work! Oh well, I guess it won't hurt to have it wait until the morning. Still, I hate all this downtime.
-
All drives with errors during parity check
jebusfreek666 replied to jebusfreek666's topic in General Support
During a parity check is the only time I have gotten these errors. It is dual parity, both of which also show the same errors. Not sure what you mean by the correct order, but it was the same order as the old server. I really hope it is not the backplane. That would be disastrous. I will report back after the parity check finishes and I have adjusted all the cabling.
-
All drives with errors during parity check
jebusfreek666 replied to jebusfreek666's topic in General Support
I am doing a correcting parity check, so I will be waiting until tomorrow morning.
-
Hey guys. Just finished swapping most of my hardware into a new case. I am upgrading from my trusty Define R5 to a Supermicro 846 for greater expandability. Previously, all my HDDs were attached to an LSI 8i card (can't remember exactly which one) with SAS to 4x SATA breakout cables. Now it is SAS to SAS for the backplane. I accidentally ordered the 1m cables, which are too long for my purposes, so I may just order the 0.5m cables and swap them in anyway. I am wondering if the cable length could be playing a part in this issue.

I am currently running a parity check (hadn't run one in 3 months prior). I did recycle known working hardware, but all the cabling was replaced with brand new cables. I am about 1/3 of the way through the parity check and am kicking up UDMA CRC errors on all of my drives. I know there is no way all the drives went bad at the same time. And I have to imagine that if the issue was the LSI card, then even without the parity check being run, I would have had at least 1 of these errors pop up in the last 3 months, right? From reading through old threads on this site, I recognize that this has to do with data going in one end of the cable and not coming out the same at the other end.

I will of course reseat the SAS cables and cut the zip ties that are loosely binding them together for cable management once the parity check has completed, but from there it gets a little murky for me. Should I run another parity check after I have made these changes to ensure the error was corrected? I can't imagine it is an issue with a bad cable since they are all brand new. But I suppose, since there are only 2 cables from the HBA to the backplane, it would only take 1 defect in one cable to cause the issue. If I do reseat the cables, rerun the parity check, and still have these issues popping up, I will first order new SAS cables and try them out. But if that ends up failing too, that leaves me with 2 options, right? Either the HBA, which was fine, went bad, or the new-to-me backplane has an issue. I would like to rule out the backplane, as I just got this case off of eBay (from a reputable seller) and would like to be able to report to them if there is an issue with the backplane.
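(For tracking this between checks, a quick loop to read the UDMA CRC counter, SMART attribute 199, on every drive; the device glob is a placeholder. Note the raw value is cumulative and never resets, so what matters is whether it keeps climbing:)

#!/bin/bash
# Print the cumulative UDMA CRC error count for each drive.
for d in /dev/sd[a-z]; do
  echo "== $d =="
  smartctl -A "$d" | grep -iE 'udma_crc|^199'
done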
-
Mover/Mover Tuner behavior with multiple cache pools
jebusfreek666 posted a topic in General Support
I have not yet updated my server to 6.9.x as I am in the process of a major migration of hardware, but I had a question as to the functionality of mover with the addition of multiple cache pools. Is mover still going to be an all-or-nothing proposition, or will you be able to set it up differently for different cache pools? What I mean is, can I set my downloads cache pool to be moved once a day and set my CCTV cache pool to run once an hour? If this functionality will not be baked into Unraid itself, are there any plans for @Squid to add this to mover tuner? Not even sure if this is possible; just thought it would be a nice addition if it was not already thought of or already available.
-
Thanks for the info. I definitely want to maintain protection while I am doing it. So I guess I will go the long way around. I remember doing 2 drives at a time before when I was upgrading, and I thought that since I had the 2 original drives still intact, it protected me against another drive dropping while I was doing the rebuild, as I could just throw the originals back in to rebuild the one that died? If that is not the case, then I will do them one at a time. If you could shoot me a link to the info after you update it, I would really appreciate it! Thanks for the hard work!
-
I currently have 8 data drives and 2 parity drives. All drives are 12TB, except 2 of the data drives, which are 6TB. I would like to replace both of the 6TB drives with 12TB drives. I thought I read somewhere that I could do that and maintain parity the entire time, so it would not require a parity check, but I can't find that info again. If someone could point me in the right direction, I would appreciate it.
-
Yeah, I will have all the spinners in the front 24 bays going through the HBA. All the SSDs will be directly connected to the MOBO SATA ports.
-
Somebody on reddit said that on average it is 30-80MB per file. Then you figure out a rough number by multiplying that by the total number of video files you have (movies + episodes). For me, that ends up being 12,500 files as it stands now. 12,500 * 80MB (worst case) would be 1,000,000MB, or almost 1TB already. While I feel this is probably way on the high side for my current collection, it does point towards using the 2TB for Plex, I think. If I remember correctly, when my collection was about half its current size and I had thumbnails enabled, it was using about 300GB. This would put me in the 600GB range as it stands currently. So my best guess is a pretty wide range of 600GB - 1TB. Either way, I think it is probably best to use the 2TB here. Unless someone else can chime in with first-hand knowledge.
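(Rather than estimating, you can measure the current footprint directly; the appdata path below is an assumption based on a typical binhex-plex mapping, so adjust it to match your template. Preview thumbnails land under Media, artwork and metadata under Metadata:)

# Measure what Plex is actually using today (paths assumed, adjust to your mapping).
du -sh "/mnt/cache/appdata/binhex-plex/Plex Media Server/Media"
du -sh "/mnt/cache/appdata/binhex-plex/Plex Media Server/Metadata"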
-
I am in the process of a complete overhaul of my server to increase current storage and expandability. I am transitioning to a CSE-846 case and will also be updating some of the hardware. Currently for my cache I am running 2x Samsung Evo 850 500GB SSDs in RAID1 that house everything I cache (downloads/file transfers, Docker app data, VMs). I would like to take advantage of the new multiple pools feature in Unraid 6.9 after I update. I also have on order a Samsung Evo 870 1TB and a Samsung 870 2TB SSD. I am probably going to do the pools as follows:
- Plex app data (with metadata and full video preview thumbnails)
- Deluge DLs and file transfers
- Docker app data and VMs
The vast majority of my server is filled with movies (approx. 2,500) and TV shows (approx. 10,000 episodes), and it is, above all else, a Plex server. I am currently at 96TB total array, about 80% full. I will be expanding this very shortly. And with the addition of starting a 4K UHD collection soon, I could see the array doubling in short order. My question is, which drive should I use for each pool? I am leaning towards doing it like this:
- Plex app data -> Evo 870 2TB
- Deluge DLs and file transfers -> Evo 870 1TB
- Docker app data and VMs -> 2x Evo 850 in RAID1
I'd like to use the 870s for DLs as they are brand new and have way better longevity when it comes to repeated writes. I am just not sure if my Plex app data will grow to the point where it would be worth it to use the 870 2TB once I enable video previews. I had them before when my server was half the size it is now, and they filled my cache pretty quickly, but again, that was only a 500GB pool that held everything. So I guess, if anyone has a Plex server around 200TB and could comment on the size of the Plex app data folder with video previews, that would be super helpful! Anyway, does this seem like the optimal layout or am I doing something dumb?
-
@binhex any idea on the above issue? Here is what the log says:

[v3.0.6.1196] System.Net.WebException: The operation has timed out.: 'http://192.168.1.69:8112/json' ---> System.Net.WebException: The operation has timed out.
  at System.Net.HttpWebRequest.RunWithTimeoutWorker[T] (System.Threading.Tasks.Task`1[TResult] workerTask, System.Int32 timeout, System.Action abort, System.Func`1[TResult] aborted, System.Threading.CancellationTokenSource cts) [0x000e8] in /build/mono/src/mono/mcs/class/System/System.Net/HttpWebRequest.cs:956
  at System.Net.HttpWebRequest.GetResponse () [0x0000f] in /build/mono/src/mono/mcs/class/System/System.Net/HttpWebRequest.cs:1218
  at NzbDrone.Common.Http.Dispatchers.ManagedHttpDispatcher.GetResponse (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookies) [0x00123] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\Dispatchers\ManagedHttpDispatcher.cs:81
--- End of inner exception stack trace ---
  at NzbDrone.Common.Http.Dispatchers.ManagedHttpDispatcher.GetResponse (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookies) [0x001bb] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\Dispatchers\ManagedHttpDispatcher.cs:107
  at NzbDrone.Common.Http.HttpClient.ExecuteRequest (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookieContainer) [0x00086] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\HttpClient.cs:126
  at NzbDrone.Common.Http.HttpClient.Execute (NzbDrone.Common.Http.HttpRequest request) [0x00008] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Http\HttpClient.cs:59
  at NzbDrone.Core.Download.Clients.Deluge.DelugeProxy.AuthenticateClient (NzbDrone.Common.Http.JsonRpcRequestBuilder requestBuilder, NzbDrone.Core.Download.Clients.Deluge.DelugeSettings settings, System.Boolean reauthenticate) [0x0005b] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Download\Clients\Deluge\DelugeProxy.cs:295
  at NzbDrone.Core.Download.Clients.Deluge.DelugeProxy.BuildRequest (NzbDrone.Core.Download.Clients.Deluge.DelugeSettings settings) [0x0006d] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Download\Clients\Deluge\DelugeProxy.cs:203
  at NzbDrone.Core.Download.Clients.Deluge.DelugeProxy.ProcessRequest[TResult] (NzbDrone.Core.Download.Clients.Deluge.DelugeSettings settings, System.String method, System.Object[] arguments) [0x00000] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Download\Clients\Deluge\DelugeProxy.cs:210
  at NzbDrone.Core.Download.Clients.Deluge.DelugeProxy.GetVersion (NzbDrone.Core.Download.Clients.Deluge.DelugeSettings settings) [0x00000] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Download\Clients\Deluge\DelugeProxy.cs:53
  at NzbDrone.Core.Download.Clients.Deluge.Deluge.TestConnection () [0x00000] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Download\Clients\Deluge\Deluge.cs:228

21-4-8 17:56:38.3|Warn|SonarrErrorPipeline|Invalid request Validation failed: -- : Unknown exception: The operation has timed out.: 'http://192.168.1.69:8112/json'
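(A quick way to test the same endpoint Sonarr is timing out on, from the Unraid console; "deluge" is the Deluge web UI's default password, so substitute your own:)

# POST an auth.login request to the Deluge web JSON-RPC endpoint.
curl -sS -m 10 -X POST 'http://192.168.1.69:8112/json' \
  -H 'Content-Type: application/json' \
  -d '{"method":"auth.login","params":["deluge"],"id":1}'
# Any JSON reply means the endpoint is reachable; a timeout here points at
# networking between the containers rather than at Sonarr itself.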
-
Same issue here. Just updated and can no longer connect to Deluge or Jackett. Testing Deluge in Sonarr kicks back an error at the top of the test page that says: Unknown exception: The operation has timed out.: 'http://192.168.1.69:8112/json'
-
Having an issue with Plex not playing certain shows/files. Others play just fine. It seems to be an audio transcoding issue. The Plex Web console shows this error over and over while trying to load a video:

[Transcoder] [eac3_eae @ 0x72be00] EAE timeout! EAE not running, or wrong folder? Could not read '/config/transcode/pms-95b2194a-a9e2-4e99-ac6d-c8ad804e6945/EasyAudioEncoder/Convert to WAV (to 8ch or less)/bf06286f-1d71-4930-947e-a38b2a08d3dd_2440-0-452.wav'

So it seems to be an issue only with EAC3 transcodes, I think. Also, I am using an Nvidia Quadro P2000 for transcoding (video only, I'm pretty sure). Transcode is currently mapped to /config/transcode with appdata set to prefer cache. Not sure how to fix this. I have changed nothing in the setup for about a year, other than updating to new builds. Just updated a day ago, I think, so it might be related to that.

Edit: While waiting for a response, I did a little searching and found others with a similar issue. I ended up finding one person who said that if you stop the docker and delete the contents of the codecs folder, it forces Plex to redownload the codecs when you start it back up and play something with the offending codec, which can resolve the issue. Happy to report this solved the issue for me, in case anyone runs into the same thing.
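(For reference, the fix boils down to something like this; the container name and appdata path are assumptions, so match them to your own setup:)

# Stop Plex, clear the cached codecs, restart; Plex re-downloads codecs on
# demand the next time it hits a file that needs them.
docker stop binhex-plex
rm -rf "/mnt/cache/appdata/binhex-plex/Plex Media Server/Codecs/"*
docker start binhex-plex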
-
I am looking at running my IP cams on my server. What I would ideally like to be able to do is dedicate an external HDD with Unassigned Devices to just hold the recordings, completely separate from the array. I am looking for an option for a wall-mounted (or possibly recessed, like a wall safe) strongbox with pass-throughs so I could still run power and data cables to it. This way, if someone were to come nab my server, the drive with the recording of what happened would be left behind. I am not even sure if what I am looking for exists. Anyone know any possible solutions?
-
Thank you sir. That did the trick.