Everything posted by jebusfreek666

  1. Is this a fake knock-off then? Amazon has it listed for $671. This thing looks just about perfect for what I want.
  2. You mentioned that these drives were shucked Seagate Backup Plus? A friend of mine was using similar drives with the 3.3V pin issue. Long story short, he had drives dropping off like this as well (not sure on the diagnostics on his side, though). We figured out that in his case it was cheap electrical tape with imperfect installation over the pins. After heating up sufficiently while spun up for a long time (like during parity checks), the tape would slightly change shape and cause the drive to drop off. Probably not your issue, but I figured I would mention it just in case. Also, I am in the planning stages of building a large storage server like yours. Could you tell me what case you are using, and what you think of it? I have been back and forth between a couple of different options.
  3. Is there a way to prioritize certain Docker containers over others in Unraid, giving them larger shares of the UL/DL bandwidth?
  4. I am having an issue where my DL speeds fluctuate wildly, bouncing around between 30 MiB/s and a few KiB/s. It fluctuates a lot, which I can live with. The biggest issue is that it will look like it is running well, then all of a sudden drop to 0, and then all of the torrents have to build speed back up again. Any thoughts on where to start looking?
  5. Recheck your VPN credentials first. Are you using PIA?
  6. I have several Duplicati backups set up to run, and their size changes a lot, so I have no real way of knowing how long they will run. That Saturday would be the last full day before the next backup is set to run. (I have 12TB dual parity, so a check takes a day.) The backups run every 2 weeks: 3 smaller ones run Sunday night into Monday morning, then I have set aside 3 days for the medium one, which leaves 9-10 days for the largest. If it takes the full 9 days, then on the 10th day I could do the parity check before the backup cycle starts again. I don't think it will take the entire window, but I don't know for sure; my UL speed sucks, and it depends on how much new data has been written since the last run.
  7. @itimpi I do not know if what I want to do is possible, but it was suggested in another post that this plugin might allow it. I would like to schedule a parity check every 4 weeks, on a Saturday. That would let me set up a schedule that does not interfere with Duplicati's upload schedule. I know they can both run at the same time, but I have been having issues with Duplicati hanging and am trying to reduce any possible interference. I know there are weekly options I can set to first, second, third, fourth, or last, but I think those refer to the weeks of the month, not the number of weeks since the last run. What I mean is that if I select weekly and 2nd week, it would run the second week of each month, not every 2 weeks. I want it to run specifically on a Saturday every 28 days. Is this possible?
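Since standard schedulers can express "every Saturday" but not "every 28 days on a Saturday" directly, one common workaround (a sketch, not a built-in Unraid feature) is a weekly Saturday job that only proceeds when a multiple of 28 days has passed since a fixed reference Saturday. The reference date below is an arbitrary assumption:

```python
# Run this check from a weekly Saturday job; it returns True only
# every fourth Saturday, counted from a fixed reference Saturday.
from datetime import date

REFERENCE = date(2020, 1, 4)  # an arbitrary past Saturday (assumption)

def is_check_day(today: date) -> bool:
    # Days elapsed since the reference; a parity check is due only
    # when that count is a non-negative multiple of 28.
    delta = (today - REFERENCE).days
    return delta >= 0 and delta % 28 == 0

print(is_check_day(date(2020, 2, 1)))   # exactly 28 days later -> True
print(is_check_day(date(2020, 1, 18)))  # only 14 days later -> False
```

The same idea works from any weekly trigger: the scheduler supplies "Saturday," and the date arithmetic supplies "every fourth one."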
  8. Thanks for the suggestion, but from what I have read about the plugin, it will not do what I am asking for. According to the info I have found, it still uses the built-in parity check scheduler and just allows pausing and resuming. It does not appear to let me set a check to run every 28 days like I want. Unless I am completely missing something?
  9. I do not know if what I want to do is possible, but here it is. I would like to schedule a parity check every 4 weeks, on a Saturday. That would let me set up a schedule that does not interfere with Duplicati's upload schedule. I know they can both run at the same time, but I have been having issues with Duplicati hanging and am trying to reduce any possible interference. I know there are weekly options I can set to first, second, third, fourth, or last, but I think those refer to the weeks of the month. I want it to run specifically on that Saturday every 28 days. Is this possible?
  10. Yeah, I bit the bullet and just did it. I got stuck on adding the Plex info, but read through the other guy's issue here, reverted back to just my local IP, and it went through after a few tries. It is now doing the library scan. You might want to mention in the readme that you may have to enter just the IP, as all the examples given show the web/index.html and nothing about just using the IP. Unless I just missed it several times.
  11. I'm stuck on the first part of setting up Gaps. Is getting an API key really this complicated with TMDB? I have to give them my name and address, an application name and URL, etc.? That seems a bit extreme.
  12. Damn, that's pretty. On wheels too! Is that the Rosewill case or Norco?
  13. Cool, thanks @itimpi. As always, super helpful!
  14. Right, I get that, and I will be clearing it first and moving the data to other disks in the array. But since parity will be valid for parity 1 but not for parity 2, should I not click the "parity is already valid" option and just rebuild after? Or do I click that it is valid (half the parity disks are?) and then still rebuild because parity 2 won't be?
  15. So, just to make sure I do this correctly: after I move all the data off the disk, I go to New Config and click retain current configuration. After that I move disks 7-9 to be disks 6-8, and just leave the disk that was 6 unassigned. Then I apply, do not click the "parity is already valid" option, restart the array, and let it rebuild parity?
  16. I have read the wiki, and will be removing a disk from my array in the near future. My plan is to leave one hot-swap slot free. I would like to remove disk 6 of a total of 9 disks. The reason I want to remove this specific disk is that after upgrading most of my drives from 6TB to 12TB, I am left with an odd order in which some disks are 6TB and some are 12TB: disks 1-5 are all 12TB, disk 6 is 6TB, disk 7 is 12TB, and disks 8 & 9 are 6TB. If I remove disk 6 (which is in a hot-swap bay), I would be left with the larger capacity on top and the smaller capacity (which I also plan on replacing as soon as the 12s go back on sale) on the bottom. My question is: if I follow the "remove then rebuild parity" method, will Unraid renumber my disks, or will it leave a hole? Will it show disks 1-6 as 12TB and 7 & 8 as 6TB? Or will it leave a hole where disk 6 was and show 1-5 as 12TB, disk 6 as an empty slot, 7 as 12TB, and 8 & 9 as 6TB? And if it will leave a hole in the disk 6 slot, can I just change disks 7-9 to be disks 6-8 while reassigning, before rebuilding parity, or will that cause problems?
  17. I currently have a Define R5, which holds 8 HDDs natively. I have added a 3-in-2 hot-swap cage in the upper two 5.25" slots, giving me a total of 11 drives. I am now looking for a way to add more drives to this case. Ideally, I would like to add an upward-pointing fan to the bottom of the case, just behind the rack of drives, and over that mount some kind of rack that holds 3 drives vertically, so the bottom fan blows through them as well as the front fan. I am not sure a rack or bracket like I have described even exists. If anyone has a hint, that would be great! Edit: Looks like someone did almost exactly what I am looking for here, but with the R4 and no fan.
  18. I have only built one system so far, so I will let others chime in, as they will be far more capable than I am at pointing you in the right direction. But I was wondering: do you have a GPU in your current setup capable of handling the transcoding? It seems to me it would be cheaper and easier to switch to (or add) a GPU if general slowness during transcoding is the biggest issue at the moment.
  19. I have kind of been wondering about his original question too. I currently have a Define R5 stuffed to the gills with drives. Right now I am swapping in larger-capacity drives, but it got me wondering: if I fill these, is there a way to fill another tower and add those drives to my original Unraid server? I mean, I guess I could just drill holes for the SATA cables and run a separate PSU, but that doesn't seem very elegant.
  20. I would kill for a symmetrical 500/500 service. I recently decided to upload all my media to my Google Drive with Duplicati because, well, why not... I had to upgrade to the fastest service Spectrum offers. They call it gig, but it is "up to" 940 Mbps down (I routinely get about 800 Mbps). I did this just to raise my UL speed to a whopping 35 Mbps. Utterly pathetic. Running constantly, it took about 9 months! But it was funny... I really want to share my media with my family so they can ditch cable, but with 35 Mbps UL, that is not going to happen. Unfortunately, Spectrum is the only game in town.
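For scale, the theoretical best case at a given upload speed is easy to work out. This sketch assumes a sustained line rate with no protocol or backup-tool overhead, so real uploads (like the 9-month run above) take longer:

```python
# Theoretical minimum upload time for a given data size and link speed.
# Uses decimal units throughout (1 TB = 1e12 bytes, 1 Mbps = 1e6 bit/s).
def upload_days(size_tb: float, mbps: float) -> float:
    size_bits = size_tb * 1e12 * 8       # TB -> bits
    seconds = size_bits / (mbps * 1e6)   # bits / (bits per second)
    return seconds / 86400               # seconds -> days

print(round(upload_days(10, 35), 1))   # ~26.5 days for 10 TB at 35 Mbps
print(round(upload_days(10, 500), 1))  # ~1.9 days at a symmetrical 500 Mbps
```

The library size here (10 TB) is just an illustrative figure, not from the post; the point is how dramatically the 35 Mbps cap dominates the timeline.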
  21. This could be a cool add-on to the web UI. While Krusader is useful, it does have its kinks. A file manager with the ability to open text docs, pics, video, etc. would be cool. But even barring something that heavy, just the ability to move files, or at the very least rename them, would be great. Having to fire up Krusader just to rename files I sourced elsewhere, so that Sonarr/Radarr can rename them again so Plex can see them, is a pain given how finicky Krusader can be.
  22. Running 2 data disks at the same time, speeds have taken a major dump. They are currently sitting around 105 MB/s at 50% completion, with an estimated time to completion of around 27 hours. Edit: Once it crossed the 50% threshold, speeds improved to around 170 MB/s. I am guessing it was on the inner tracks when I looked. The new estimate is around 20 hours, so it looks like the data disks will end up taking about the same time as the parity disks.
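Estimates like the ones above are essentially remaining data divided by the current speed. A quick sketch (decimal units; the 12 TB disk size is an assumption, and the real ETA shifts as the transfer rate changes across the platter):

```python
# Snapshot estimate of time remaining, given disk size, fraction done,
# and the current transfer rate. Assumes the rate stays constant, which
# it won't on spinning disks, so treat the result as a rough guide.
def hours_left(disk_tb: float, pct_done: float, mb_per_s: float) -> float:
    remaining_mb = disk_tb * 1e6 * (1 - pct_done)  # TB -> MB (decimal)
    return remaining_mb / mb_per_s / 3600          # MB / (MB/s) -> hours

print(round(hours_left(12, 0.5, 105), 1))  # 50% done at 105 MB/s
print(round(hours_left(12, 0.5, 170), 1))  # same point at 170 MB/s
```

Comparing the two calls shows why the ETA dropped so sharply once the speed recovered: the remaining half of the disk is the same size either way, so the estimate scales inversely with the rate.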
  23. Not yet; it has been put on the back burner for the moment. I am currently swapping all my HDDs for larger ones and looking at a rebuild of about a week. If I do find anything out, I will post it here. I read somewhere that someone else was having a similar problem and got around it by ignoring the first tab (the global settings) and just making specific rules for each label on the second tab. That is what I was going to try next.