Everything posted by bnevets27

  1. Ever since I started using unraid (2010, v4.7), the community has been the leader in adding enhancements/value/features to unraid. Initially unraid didn't do much more than be a NAS, which was exactly what it was designed to do. But it was the community that started to build plugins, and guides on how to use them, to add way more functionality than the core function of just a NAS. That's not to say limetech was lacking really. They did and do build a rock solid NAS, and that was the motto for the longest time. But a lot of users, and definitely myself, wanted to keep utilizing the hardware more and more, way beyond just a NAS. The community really thrived at that. I've been able to add much more functionality to my system, and learn and use more and more programs, thanks to unraid/limetech and the community.

     I've really enjoyed the community here. It has felt small and personable, and while it's growing it still has that small, tight-knit feeling. A place where people give great support and never make anyone feel like they have asked a stupid question. Unraid is where I've learned a lot of things about linux, dockers, etc. Again, thanks to the community. I'm sure I'm just stating the obvious here at this point. Not to say unraid is worthless without the community, but the community is what helps it thrive, or maybe even the cause of why it thrives.

     On a somewhat selfish note, I have very much benefited from the work of all the community devs, especially @CHBMB and @bass_rock with their work on the nvidia plugin. For the people that use it, it was a huge breath of fresh air for older systems running plex. And it is definitely not exclusive to older systems. I also used the DVB drivers that were built by them. Unraid/limetech really didn't seem interested, at the time, in providing support for the nvidia drivers, and @CHBMB and the @linuxserver.io team saw how much that "niche" (I honestly don't think it's that niche) group of people really wanted it and would really benefit from it. I was ecstatic when it was finally released. I'm not sure where the notion came from that the "third party" kernel was unstable, as @CHBMB and the @linuxserver.io team work really hard at releasing good dockers/plugins etc., with very good support and documentation. It's such a no brainer when going into CA Apps and seeing something made by @linuxserver.io; it's always what I'll install given the choice.

     I had mentioned in the request thread for GPU drivers in the kernel to @limetech that having the ability for hardware acceleration in plex via nvidia cards would attract users. I'm sure it did, and likely attracted users for other reasons, once @CHBMB and @bass_rock got it working. Limetech didn't ever comment in that thread and, from what I remember, was not at all interested at the time in pursuing it, at least from what I could tell from public information. Yet @CHBMB and @bass_rock worked hard to make it happen, and as I mentioned, I'm sure that created revenue for limetech and I know it made many people happy.

     I'm glad limetech has decided to build in the feature and support it at the company level. Really this should be a big win for both @limetech and @CHBMB: for limetech because it's a great added feature, and for CHBMB because it gives him a break from having to support that plugin, which was clearly a lot of work. Maybe it even frees up his time to work on other great things for the community. While I'm glad he is finally getting a break, I do hope he and any other developers that may have decided to leave come back, as I do really appreciate their work. I also hope that this situation actually ends up being more positive than negative. It looks like limetech has learned, even more than I'm sure they already knew, how valuable their community developers are. And hopefully more communication will help build a stronger, better relationship with the community and its developers, because the combination of the limetech team and the community developers has created an ever-evolving, fantastic product. Thank you to the limetech team, the community, and all the community developers.
  2. True, but the successful backup that it completed is unfortunately not a backup of a full working system, and in this case basically blank. I was lucky I did a manual copy at the start, so it wasn't a complete loss. I usually also keep (by renaming the folder) a permanent copy of a backup that CA backup/restore creates once in a while, in case I don't catch something that's gone awry within the deletion period, so I can go further back if necessary. But yes, with delete backups every 60 days and say a minimum of 2 backups kept, it would definitely keep a good backup long enough. In the same vein, keeping every nth backup indefinitely would be nice too, but I can understand not wanting to put in too many options. Thanks squid for the great app!
  3. Short version: if the server has been off for longer than the number of days set in "Delete backups if they are this many days old" and CA Appdata Backup / Restore runs a backup on a schedule, it will delete all the backups. Solution: probably a good idea to have a setting for a minimum number of backups to keep (a sketch of that pruning logic is below). How I came about this issue: my cache got corrupted and my dockers stopped working. Since I didn't have time to fix it and didn't really need my server running when the containers weren't working, I shut it off till I had time to work on it. While working on it I've left it on overnight. Came back to it today and my backups have been deleted. My settings are set to delete after 60 days, and well, my server was off for over 60 days.
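     To illustrate the suggestion, here's a minimal sketch of age-based pruning with a "minimum backups to keep" guard. This is my own illustration, not how the plugin actually works; the folder layout and paths are assumptions.

```python
#!/usr/bin/env python3
"""Sketch: delete backups older than N days, but never let fewer than `min_keep` remain.
Assumes one directory per backup run, which is NOT necessarily how
CA Appdata Backup / Restore lays out its backups."""

import shutil
import time
from pathlib import Path

BACKUP_ROOT = Path("/mnt/user/backups/appdata")  # hypothetical location
MAX_AGE_DAYS = 60        # "Delete backups if they are this many days old"
MIN_BACKUPS_TO_KEEP = 2  # the proposed safety setting

def prune_backups(root: Path, max_age_days: int, min_keep: int) -> None:
    backups = sorted((p for p in root.iterdir() if p.is_dir()),
                     key=lambda p: p.stat().st_mtime, reverse=True)  # newest first
    cutoff = time.time() - max_age_days * 86400
    # The newest `min_keep` backups are never touched, no matter how old they are.
    for backup in backups[min_keep:]:
        if backup.stat().st_mtime < cutoff:
            print(f"deleting {backup}")
            shutil.rmtree(backup)

if __name__ == "__main__":
    prune_backups(BACKUP_ROOT, MAX_AGE_DAYS, MIN_BACKUPS_TO_KEEP)
```

     With min_keep set to 2, the two most recent backups would have survived even though the server sat powered off past the 60 day window.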
  4. Don't have too powerful of a server but got some of my CPU doing work, along with my 1050Ti.
  5. ^ That's all about right. But it's basically up to cost/performance. For the most part, using up more slots costs less and has the best performance. For example:

     3 x IBM M1015 (LSI 2008 chipset): Max available speed per disk is 320MB/s. Cost is about $90 ($30 each), but uses 3 slots.

     2 x IBM M1015 (LSI 2008 chipset) + 2 x Intel RAID SAS Expander RES2SV240: Max available speed per disk is 205MB/s. Cost is about $300 ($60 for 2 M1015 and $240 for 2 RES2SV240). Uses 2 slots; the RES2SV240 can be powered without using a slot.

     2 x LSI 9207-8i + 2 x Intel RAID SAS Expander RES2SV240: Max available speed per disk is 275MB/s. Cost is about $340 ($100 for 2 LSI 9207-8i and $240 for 2 RES2SV240). Uses 2 slots; the RES2SV240 can be powered without using a slot.

     1 x LSI 9300-16i + 2 x RES2SV240 (not sure if this is possible): Max available speed per disk is 275MB/s. Cost is about $540? ($300 for the LSI 9300-16i and $240 for 2 RES2SV240). Uses 1 slot.

     Or if you have a case with a SAS expander built in, that can save a lot of headache and trouble. Quickest and nicest wiring job also. For example, if you get a 24 bay case with a built in expander, in single link with a SAS2 expander connected to a single IBM M1015 (LSI 2008 chipset), you have 125MB/s max per drive (when all drives are being used simultaneously). You could also use 1 x IBM M1015 (LSI 2008 chipset) and the HP 6Gb (3Gb SATA) SAS Expander: max available speed per disk is 95MB/s, and cost is about $60 total. A rough sketch of how I'm figuring these per-disk numbers is at the end of this post.

     As far as lanes, this is the best post for explaining speeds on the pcie bus. But the IBM M1015 (LSI 2008 chipset) uses PCIe gen2 x8, which is equal to PCIe gen4 x2. So on x570 you'll only need 6 lanes to max out the cards'/drives' speed with 24 disks. And realistically, you only need x1 per card on PCIe gen4 and you'll have 185MB/s available per disk (if using 24 disks), which is only 3 lanes. So unless I'm missing how lanes are determined, lanes aren't really going to be an issue at all. It's going to be more likely a PCIe slot issue, which in some way is related to lanes but not really. The issue you kind of run into is consumer boards tend to give you x16 slots and x1 slots. Sadly the x1 slots aren't really useful in a physical sense, especially when it comes to HBA cards, as they are for the most part x8 cards. In the server board world, x8 is a much more common slot. You can easily use a x8 card in a x16 slot, but you'll run out of them fast, and you might need them. I think you could use adapters from x1 to x8 without a performance hit, but not 100% sure. Really just depends on what the motherboard manufacturer decides. Would be easier if they just used x16 for every slot, but I doubt that will happen. It makes way more sense to get the right board for what you need, but this was interesting: https://linustechtips.com/main/topic/1040947-pcie-bifurcation-4x4x4x4-from-an-x16-slot/
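     Rough math behind those per-disk numbers, for anyone curious: take the shared uplink bandwidth and divide it by the number of drives hitting it at once. This is my own back-of-the-envelope sketch using raw link rates; real-world throughput is lower once encoding and protocol overhead are factored in, which is where measured figures like the 320MB/s for an M1015 in a gen2 x8 slot come from.

```python
def per_disk_mb_s(gbit_per_lane: float, lanes: int, drives: int) -> float:
    """Raw uplink bandwidth (Gbit/s per lane * lanes), converted to MB/s,
    spread evenly over `drives` simultaneously active disks."""
    total_mb_s = gbit_per_lane * lanes * 1000 / 8  # Gbit/s -> MB/s (raw, no overhead)
    return total_mb_s / drives

# Single-link SAS2 expander (4 lanes of 6Gb/s) feeding 24 drives:
print(per_disk_mb_s(6, 4, 24))  # 125.0 -> the 24-bay single-link case above

# One PCIe gen2 x8 HBA (8 lanes at 5GT/s) with 8 drives attached directly:
print(per_disk_mb_s(5, 8, 8))   # 625.0 raw ceiling; measured is closer to 320MB/s
```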
  6. Hard to pick one thing for both, so two things that tie for number one. 1) Ease of use (for the most part): much easier than it was years ago. 2) Unraid's core feature: redundancy, with any drive added at any time, of any size/manufacturer, etc. Again, a tie for what I would like to see in 2020. 1) Easier/built-in server to server backup and backup to cloud storage. 2) Multiple cache pools (which I know is a planned feature).
  7. Just read about the work that went into releasing the latest version. Thank you to all that are involved in keeping this amazing addition to unraid working. I've been enjoying the benefits of it ever since it was released, and we all know how great it is to have this ability. I really appreciate it. I searched back a bit and didn't see anything recently about the work being done on combining the Nvidia build and the DVB build. That would be the next dream come true. Is there any place I can follow the development of that build? It seems like it's getting lost in this thread.
  8. Thanks Johnnie. That makes sense, but I thought I had not seen it shown like that (both disks as "new"/blue) the first time I did the parity swap/copy. From what I've gathered, the copy operation happens, then the array stays stopped, and then you have to bring it online to rebuild. So during the copy, the old parity (which in this case is now in slot 15) is being copied to the new parity in the parity 2 slot. Disk 15 during this operation isn't getting written to. Then after the copy operation completes, the array needs to be started, and when the array is started, that's when the data on disk 15 is overwritten. I know these are minor details, but having the correct information in the GUI would help with understanding what is really happening. Started the parity copy now, and I won't be making any changes until I start the array this time. Thanks to both of you.
  9. Ah ok, that's exactly where I went wrong. Thanks. Maybe adding a warning to the wiki or GUI might not be a bad idea. You make a good point, and I was using the wrong terminology, which I'm usually a stickler for, so thank you for putting me straight. I edited my last post to reflect the proper wording. Since I know I'm starting over, I'm going to perform the parity swap/copy again. The server is currently in the above state, waiting to start the process. But I want to confirm that seeing "All existing data on this device will be OVERWRITTEN when array is Started" on both disks is the expected behaviour. It really feels like it shouldn't be what is expected.
  10. That's it, nothing else happened. But I haven't talked about rebuilding parity.

      Parity 2 was in bay 2. Disk 15 was in bay 19 (the replacement for the dead disk 15). The parity swap/copy was performed. All I had to do was hit start and unraid would rebuild the data onto disk 15. But before I hit start I moved the disks: Parity 2 (which after the parity swap became disk 15) was moved to bay 15, and disk 15 (which after the parity swap became parity 2) was moved to bay 2. Unraid then marked both disks as new (blue square) and said "All existing data on this device will be OVERWRITTEN when array is Started" on both disks (parity 2 and disk 15).

      The only way to correct the above issue is to change the assignments back to before the parity swap happened. If I assign disk 15 back to the parity 2 slot and unassign disk 15 from slot 15, then unraid reports "Configuration valid". So I think the obvious process would be to basically start over: start the array without disk 15 assigned to a slot, stop the array, add the new disk and perform the parity swap/copy, then start the array as I should have done, without moving anything.

      What I think happened is unraid saw the disks disappear when the drives were pulled out. For some reason it seems to have forgotten the parity swap operation happened. I suspect if I had powered down the server and then did the move, unraid would not have cared. But that's only a theory. Not sure if something weird just happened, or this is a "bug", or a note should be added to the procedure. It would need to be replicated of course, but it would be simple: do a parity swap, then unplug and replug while powered on, and see if unraid forgets that the parity swap happened. So the note would just be to not make any changes until the array is started at least once.

      Well, I decided to try and start over. Started the array with a valid config, which like I said was going back to before the parity swap: disk 15 back to the parity 2 slot and disk 15 unassigned. With the array up everything was fine: disk 15 was emulated and I have valid parity on both parity drives. Now I was going to start the parity swap again, but I don't know if this looks right. I didn't think disk 15 had the "data will be overwritten" warning (disk 15 being the "old" parity 2). Please confirm this is expected on a parity swap and I'll proceed. But it looks wrong, and like I would have data loss (loss of the data on disk 15).

      *Note, this is what I saw after moving the disks around and why I was worried to start the array. As the data on disk 15 (old parity 2) needs to be copied onto parity 2, if the data on disk 15 is overwritten then the parity info is lost. So this looks wrong. The ironic thing here to me is both disk 15 and parity 2 are currently identical, as the parity copy operation finished. Assigning either to parity 2 should be fine, but unraid won't allow the new drive to be assigned.
  11. Yup, that's how I was aware of the drive failure.

      *** So I decided to do something I guess kind of dumb, and now I'm not sure how to proceed again. I think I know, but would like to be sure. I did the parity swap, and unraid copied the data from my old parity to my new parity with the array offline. That operation completed and then I was presented with a stopped array. I had valid parity, and starting the array would start the rebuild of disk 15 (overwriting my old parity). That's all well and good. But I moved the 2 drives around the server physically, with the server on and the array stopped (as I have hot swap bays). That pissed off unraid..... In hindsight I think if I had done it with the server off it would have been fine, and moving them physically shouldn't matter to unraid, correct?

      So if I assign my old parity back to the parity slot and leave disk 15 unassigned, it's happy. It's back to before I started the parity swap and copy procedure. I then start the array, then stop the array, then continue with the parity swap/copy procedure again. I would suspect this will work 100%, as it will just copy the parity again (unnecessarily really). After the copy is complete, start the array and let it rebuild disk 15.

      But I think there is a way to assign the new parity drive and tell it to trust it, then rebuild disk 15. Essentially I know I have 2 parity disks that are the same due to the parity copy procedure. My new parity and old parity drives should be identical. But unraid wants to "All existing data on this device will be OVERWRITTEN when array is Started" on my new parity, even though it should be valid since it's a clone of the old parity. I'm about 99% sure I can't use new config, as I'll lose parity and the data on my failed disk. I was thinking more of a trust parity option, but I don't see that option. Option one seems a lot safer but a bit longer, so I'll likely end up there, but I would like to know if option 2 really exists.

      *** Side note, I was just thinking it would be nice to have a history of "acknowledged" drive stats. Since you get a notification of, say, a reallocated sector count of 5, and while that's not great, as long as it stays at a steady state it's not a big concern. But then some time later you get a notification for 6. You acknowledge and move on. What I'm getting at is it's hard to keep track of whether the drive health is slowly diminishing if you don't recall when you acknowledged the errors. Sure there are other solutions (a rough sketch of one is below), but a history of the error and when you said "yeah I saw it" would be nice to have in the drive information page.
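      In the meantime, something like the following is the kind of history I mean: a script you could run on a schedule (cron or the User Scripts plugin) that appends a timestamped snapshot of a few SMART raw values to a CSV, so you can see whether counts are creeping up between acknowledgements. It's a rough sketch; the device list, attribute IDs and log path are just placeholders.

```python
#!/usr/bin/env python3
"""Append timestamped SMART raw values to a CSV for trend tracking (rough sketch)."""

import csv
import datetime
import subprocess

DEVICES = ["/dev/sdb", "/dev/sdc"]           # placeholder; list your array disks here
ATTRIBUTES = {"5": "Reallocated_Sector_Ct",  # SMART ID -> attribute name
              "197": "Current_Pending_Sector",
              "198": "Offline_Uncorrectable"}
LOG_FILE = "/boot/config/smart_history.csv"  # assumed location on the flash drive

def read_raw_values(device: str) -> dict:
    """Run `smartctl -A` and pull the raw values for the attributes above."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    values = {}
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] in ATTRIBUTES:
            values[ATTRIBUTES[fields[0]]] = fields[-1]  # raw value is the last column
    return values

with open(LOG_FILE, "a", newline="") as f:
    writer = csv.writer(f)
    now = datetime.datetime.now().isoformat(timespec="seconds")
    for dev in DEVICES:
        for name, raw in read_raw_values(dev).items():
            writer.writerow([now, dev, name, raw])
```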
  12. Well I was referring to the GUI; going into the properties of the drive shows nothing, says the disk needs to be spun up, but clicking spin up does nothing. Anyhow, moved the drive just to see, and after rebooting I could get the attributes. This disk had Current pending sector 618 and Offline uncorrectable 617. Not sure if I knew it was high before, acknowledged it and went on my way wondering how much longer it would last, or if it shot up quickly. But I'm pretty sure it was the former. Not surprised knowing the drive's history. Parity swap running now; so far initializing it was easy, now it's just time to wait. Thanks guys, and thanks limetech/bonienl for making this an easy task in the gui.
  13. It likely has. I do hate that you can't see the SMART attributes after a disk has been disabled. For the most part I would say it is unlikely a cable issue, as nothing has been touched internally in a long time. The case has hot swap bays. So far all the drives I have had fail I have confirmed were bad by plugging them into a separate system and running preclear as a stress test; most just die part way through, others have the sector count increase, etc. With one exception, if my memory is right, as one was disabled by unraid and I think tested ok. And I think it might have been from this slot, so there is a chance something is going on there. So thanks for the tip/idea. I guess the easiest thing to do would be to move it to another slot. One reason is I don't like playing around inside and causing more problems, like knocking a/another cable off, when the array isn't healthy. And secondly, the server is a bit remote at the moment and in a family member's house; they are doing the drive replacements for me and they won't be going inside. But hypothetically, if the green drive is dead (which is still likely, as I see in my notes it did have a few errors on it in the past), would the parity swap be the right path? Does it even work with dual parity?
  14. I am just looking for some advice on how to proceed. I had a drive I planned on using to upgrade my second parity, but I have had a drive fail before I was able to put it in.

      Current situation: 2 parity drives and 1 failed drive, on unraid 6.6.6 (going to update to 6.7.2 when the array is healthy again).
      Parity 1 is a WD Gold
      Parity 2 is a Seagate Archive
      Spare is a WD Red/White label
      The failed drive is a WD Green

      Where I want to end up:
      Parity 1 remains unchanged
      Parity 2 is the WD Red/White label
      The failed drive is replaced with the Seagate Archive (which was previously Parity 2)

      Size wise I'm fine to move the drives where I want. I want to know the "safest" way to accomplish my goal, which to me would be to maintain dual parity while swapping out the failed drive. I think what I want to do is Parity Swap/Swap Disable, as outlined here: https://wiki.unraid.net/The_parity_swap_procedure Would that be correct?
  15. Haven't used this, have no idea if it will work but this looks like what you are looking for. https://github.com/doctorpangloss/nut https://hub.docker.com/r/doctorpangloss/nut Found via: Let me know if you get it working and what settings were used in the template.
  16. This is likely going to sound negative and it's not intended to. How well the UI is laid out/works etc. is very subjective. What you may find confusing may be easy for others to understand. On the flip side, something you think would make it easier to understand might be confusing to others. I know for a couple of things you've mentioned I would be a lot more confused if they were changed. One example is archived notifications: I don't see it as too necessary, as they will all be in the log, which is always accessible on the top bar.

      You do have to keep in mind that this is a product designed to be installed on a user-built system. So the expectation would be that you can figure out how to build a system. So yes, it requires work and reading. Choosing the right supported hardware is on the user. There is a lot of info in the forums covering most hardware. And things like flashing the LSI firmware are also covered well on the forum. It does take some investment of time, reading and understanding. And like has been said, some of that is kind of on purpose. One good example is thinking a reset array option should be right there to click. That's a very bad idea; there is a reason it's somewhat hidden. You need to understand what it does and when to use it. With it in the forefront, people will just click without thinking.

      The UI and some procedures could definitely use some improvements, that's for sure. It's far from perfect, but it is leaps and bounds better than it was. After getting comfortable with unraid, some of the UI decisions might make a bit more sense. For example, disk info in the shares: I don't see much point in that. While I do organize my data based on disks, like it sounds you want to do also, I would argue you would have more info visible, and it might even be easier, with two windows open so you can identify which disk is which.

      Now you might notice I have been here for a bit and you are looking at this as a new user, but I don't come from an IT background, so I do look at this as a more normal user. Additionally, I help a few family and friends with unraid who are much less technically inclined, so I do see what and where they have trouble also. Setting it all up initially is the most work/learning curve and likely the most annoyance. Hardware trouble doesn't help either. But thankfully, after getting it up and running, you generally don't play with the UI much more, as unraid will just run. Sometimes when something does go wrong, or you have to do some disk maintenance, you will forget procedures due to how much time has passed. But thankfully the answers are for the most part always here, or someone will help. I would have made a better post with replies but I'm on mobile.
  17. Bought a few licenses over the years. I'll probably buy a sheet of stickers. I've never bought or put stickers on anything I own, so limetech should be proud. Build link that needs some updates now that I see it again: https://forums.unraid.net/topic/29388-excelsior-plugins-dockers-and-vms-oh-my/ Built a bunch of others but no build threads for them (six currently under my command). One other I built that I enjoyed making: https://forums.unraid.net/topic/12951-sold-smallcompact-server-hot-swap-supermicro-chenbro-dual-core/ Been very happy with unraid, more and more as it improves. Have suggested it to, and convinced, many friends and family.
  18. Confirmed. This works very well. Mouse and keyboard work perfectly. Internet works well. There is an option to trick apps into thinking android is on wifi, which is sometimes needed. I had some issues with it running very slow and crashing. This was due to memory allocation: I think I had initial set to 1GB and max set to 2GB. Changed both to 4GB and it runs nice and fast now. I couldn't get it to boot correctly on its own under the OVMF bios. I could get it to boot with the install iso loaded; I had to select "upgrade" from the install iso menu. Reinstalled with SeaBIOS and it boots fine. So, relevant info from my install that may help others:
      Initial and max memory: set both to the same, something higher than 1GB; mine is set to 4GB
      Machine: Q35-3.0 (though i440fx-3.0 also worked)
      BIOS: SeaBIOS
      vDisk bus: SATA
      Would like to find a way to access a user share or mount some storage outside the VM image file. Thanks for the update @kimocal, neat to have this finally working well!
  19. Often on forums, you can't search with less than 3 or 4 characters. That might be the case with using api as a search term.
  20. Not sure why this is "of course they aren't"; I think the majority of users here leave their unraid server on 24/7. I have a setup at one location that uses Raspberry Pi's for all of the clients (running kodi) and unraid running as the server. Since the server is in charge of recording live TV, it's on all the time. And with the minor power draw the Pi's use, I don't bother to sleep/turn them off. In a different setup/location I was using more power hungry clients and would sleep them, so the syncing was an issue for me at that time. Glad you might have found a happy solution with emby though.
  21. Might be a moot point if you leave your clients on all the time. I've found the only real way to decide between different solutions/software is to try them. Everyone has different preferences and setups. And different combinations work better for different people. If emby is easy to set up, try it out. If nothing about it bothers you, then stick with it.
  22. Don't worry, it's coming. It's already working, but there are bugs being worked out before it's released. And you can still pass the card through to a VM.
  23. My reason for mentioning Unassigned Devices is to mount the drive with the backup, copy it onto the current usb and reboot. That way you don't have to start from a fresh install; all your disk assignments and everything will be back. Sure, you could just mount the drive from the command line, but it's the easier GUI solution.
  24. No idea about Handbrake, but I would venture to guess it's in the same boat. So no, no card can be used in a docker ATM. The drivers need to be built into unRAID. See: Ability to install GPU Drivers for Hardware Acceleration.