bnevets27

Members
  • Content Count: 570
  • Joined
  • Last visited

Community Reputation

24 Good

About bnevets27

  • Rank: Member

  1. I agree and disagree. I personally currently have more RAM than I know what to do with, so I couldn't care less how much RAM unraid uses. BUT, I'm also looking at changing that in the future, and I also help with builds for others with different requirements, and I'm looking to do a straight stripped-down, lowest-specs, just-NAS type build. So I do see the need to be conservative too. At the same time, minimum-spec builds are getting more memory than they used to, but that is still evolving. I've even noticed that on an old machine, older versions of unraid that were lighter run better the…
  2. I'm trying to use this script to mount multiple shared drives. From everything I've read here, I should just be able to run multiple versions of this script. But when I do, I get the following error: "Failed to start remote control: start server failed: listen tcp 127.0.0.1:5572: bind: address already in use". I have rebooted. I'm only mounting the mergerfs in one of the versions of the script. Whichever script I run first will work, but I can't get a second to work due to the above error. Any clues? (A possible workaround is sketched after this list.)
  3. Thank you both @trurl and @itimpi. Just to confirm: I can pull my parity 2 drive and my data drive out, swap them, and then do a parity copy (Step 13). During this procedure I still have dual parity, and I'll only really have one "failed" disk? So if I understand correctly, unraid knows the data disk is actually the old parity 2 disk; it then copies the old parity 2 data to the new parity 2 disk (being the old data disk). Once done, I then have parity 1 intact, as it hasn't been touched. Parity 2 (new) has been copied from parity 2 (old), and parity 2 (old) still has the parity 2 dat…
  4. That's kind of confusing advice, as the "third drive" is the one with increasing reallocated sectors. But it's not part of the array, as I had said above. It's a spare drive sitting on the shelf that I could temporarily introduce to the array to aid in this swap, because, as you said, you would need a third drive to maintain parity. Though technically, with 2 parity drives, I probably could maintain parity without needing a third drive, but I wouldn't be maintaining dual parity during the procedure. I figured I could do that, but wouldn't that be risky? At that point, if any drive fails, I don't think I would…
  5. Basically the same reason and situation as the OP, but I have dual parity; I don't think the OP did/does. I have a data disk that would be better used as a parity drive due to it being faster. I was interested in the procedure to make that swap/move, and in this situation there is no other available drive. Situation 2: still looking for the same outcome, but with a third drive that could be used temporarily during the swapping/moving. Unfortunately the only drive I have around isn't in great shape (increasing reallocated sectors), but it's still functional and passed preclear and SMART tests. It's not…
  6. Hope the OP doesn't mind me hijacking this thread, but I have basically the same question/situation. I have a data drive I would like to swap to my parity 2 location. Sounds like there isn't a way to do this without at least losing one parity drive during the process. What would be the safest way to do this without a spare drive? What would the process be with a temp spare drive? I have a drive that isn't healthy, so I would want it in the array for the shortest period of time. The procedure with the temp spare drive is simple enough, I think, but there would be 3 rebuilds/parity checks, which…
  7. Ever since I started using unraid (2010, v4.7), the community has been the leader in adding enhancements/value/features to unraid. Initially unraid didn't do much more than be a NAS, which was exactly what it was designed to do. But it was the community who started to build plugins, and guides on how to use them, to add way more functionality than the core function of just a NAS. That's not to say limetech was really lacking. They did and do build a rock-solid NAS, and that was the motto for the longest time. But a lot of users, and definitely myself, wanted to keep utilizing the ha…
  8. True, but the successful backup that it completed is unfortunately not a backup of a full working system, and in this case basically blank. I was lucky I did a manual copy at the start, so it wasn't a complete loss. I also usually keep (by renaming the folder) a permanent copy of a backup that CA Backup/Restore creates once in a while, in case I don't catch something that's gone awry within the deletion period, so I can go further back if necessary. But yes, with "delete backups every 60 days" and, say, a minimum of 2 backups kept, it would definitely keep a good backup long enough. In that same r…
  9. Short version: if the server has been off for longer than the number of days set in "Delete backups if they are this many days old" and CA Appdata Backup / Restore runs a backup on a schedule, it will delete all the backups. Solution: probably a good idea to have a setting for a minimum number of backups (a rough sketch of such a pruning rule follows this list). How I came about this issue: my cache got corrupted and my dockers stopped working. Since I didn't have time to fix it and didn't really need my server running when the containers weren't working, I shut it off till I had time to work on it. While working on it I've le…
  10. Don't have too powerful of a server, but I got some of my CPU doing work, along with my 1050 Ti.
  11. ^ That's all about right. But it basically comes down to cost/performance. For the most part, using up more slots costs less and has the best performance. For example:
      - 3 x IBM M1015 (LSI 2008 chipset): max available speed per disk is 320MB/s. Cost is about $90 ($30 each), but it uses 3 slots.
      - 2 x IBM M1015 (LSI 2008 chipset) + 2 x Intel RAID SAS Expander RES2SV240: max available speed per disk is 205MB/s. Cost is about $300 ($60 for the 2 M1015s and $240 for the 2 RES2SV240s). Uses 2 slots; the RES2SV240 can be powered without using a slot.
      - 2 x LSI 9207-8i chipset + 2 x Intel RAID SAS Expander RES…
  12. Hard to pick one thing for both, so two things that tie for number one: 1) ease of use (for the most part), much easier than it was years ago; 2) unraid's core feature, redundancy with any drive added at any time, of any size/manufacturer, etc. Again, a tie for what I would like to see in 2020: 1) easier/built-in server-to-server backup and backup to cloud storage; 2) multiple cache pools (which I know is a planned feature).
  13. Just read about the work that went into releasing the latest version. Thank you to all who are involved in keeping this amazing addition to unraid working. I've been enjoying the benefits of it ever since it was released, and we all know how great it is to have this ability. I really appreciate it. I searched back a bit and didn't see anything recently about the work being done on combining the Nvidia build and the DVB build. That would be the next dream come true. Is there any place I can follow the development of that build? It seems like it's getting lost in this thread.
  14. Thanks Johnnie. That makes sense, but I thought I had not seen it shown like that (both disks as "new"/blue) the first time I did the parity swap/copy. From what I've gathered, the copy operation happens, then the array stops, and then you have to bring it online to rebuild. So during the copy, the old parity drive (which in this case is now in slot 15) is being copied to the new parity drive in the parity 2 slot; disk 15 isn't getting written to during this operation. After the copy operation finishes, the array needs to be started, and when the array is started, that's when the data on disk 15 is ove…
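
A note on the error in item 2: each copy of the script starts rclone's remote-control server, and every instance tries to bind the same default address (127.0.0.1:5572), so the second one fails. A minimal sketch of a workaround, assuming the script wraps a standard rclone mount command (the remote names and mount paths below are hypothetical): give each instance its own --rc-addr port, or drop the --rc flag on all but one instance.

    #!/bin/bash
    # First mount script: rc server stays on rclone's default port 5572
    rclone mount gdrive1: /mnt/user/mount_rclone/gdrive1 \
        --allow-other \
        --rc --rc-addr=127.0.0.1:5572 &

    # Second mount script: use a different rc port so the two
    # instances don't both try to bind 127.0.0.1:5572
    rclone mount gdrive2: /mnt/user/mount_rclone/gdrive2 \
        --allow-other \
        --rc --rc-addr=127.0.0.1:5573 &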
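
A note on the suggestion in item 9: age-only pruning deletes everything at once if the server was off past the age limit, because no newer backups exist to survive the cut. A minimal sketch of the proposed "minimum number of backups" rule, assuming backups are dated subfolders under one directory (the path, the 60-day limit, and the 2-backup minimum are hypothetical, not the plugin's actual settings):

    #!/bin/bash
    # Prune old backups, but always keep at least MIN_KEEP of them.
    BACKUP_DIR="/mnt/user/backups/appdata"   # hypothetical location
    MAX_AGE_DAYS=60                          # "delete if this many days old"
    MIN_KEEP=2                               # proposed minimum-backups setting

    # List backup folders newest-first, skip the MIN_KEEP newest,
    # and delete the rest only if they are past the age limit.
    ls -1t "$BACKUP_DIR" | tail -n +$((MIN_KEEP + 1)) | while read -r dir; do
        path="$BACKUP_DIR/$dir"
        # find -mtime +N matches entries modified more than N days ago
        if [ -n "$(find "$path" -maxdepth 0 -mtime +"$MAX_AGE_DAYS")" ]; then
            rm -rf "$path"
        fi
    done

With this rule, the newest two backups survive no matter how stale they are, which avoids exactly the failure the post describes.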