Everything posted by shEiD

  1. Some settings simply do not "stick". Every single time I restart the rutorrent container, I need to remove the global upload and download limits and, what is most annoying, I need to disable DHT again, every time (I do not use public trackers). This actually makes it impossible for me to set rutorrent to autostart, because I will forget to actually open it and re-set these options again. How can I make rutorrent keep all the settings changes? (A possible config-file workaround I'm going to try is sketched below.)
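In case it helps anyone else hitting the same thing, this is the workaround I'm going to try: bake the settings into rtorrent's config file in appdata so they survive container restarts, instead of relying on the webUI remembering them. The path and the exact option names (dht.mode.set, protocol.pex.set, the throttle.* rates) are my assumptions from the rtorrent wiki, so treat this as a sketch, not a verified fix:

```python
#!/usr/bin/env python3
# Sketch: pin the settings I keep losing into rtorrent.rc so they survive
# container restarts. Path and option names are my assumptions - check your
# own appdata layout and the rtorrent wiki before using.
from pathlib import Path

rc = Path("/mnt/cache/appdata/rutorrent/rtorrent/rtorrent.rc")  # hypothetical path

wanted = {
    "dht.mode.set": "disable",                    # no DHT (private trackers only)
    "protocol.pex.set": "no",                     # no peer exchange either
    "throttle.global_down.max_rate.set_kb": "0",  # 0 = unlimited download
    "throttle.global_up.max_rate.set_kb": "0",    # 0 = unlimited upload
}

lines = rc.read_text().splitlines()
present = {line.split("=", 1)[0].strip() for line in lines if "=" in line}
missing = [f"{key} = {value}" for key, value in wanted.items() if key not in present]

if missing:
    rc.write_text("\n".join(lines + missing) + "\n")
    print("Added:", ", ".join(missing))
```

Run it (or just add those lines by hand) while the container is stopped, then start it again.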
  2. @Squid Thanks for your answers! And actually you helped me solve my problem without actually giving me a solution in your post up top. This was the problem and the solution: Note: NEVER use /mnt/user/appdata/APP_NAME as the host path for the /config files. I found the possible solution in your FAQ, did it, and it actually solved most of my problems - the 15 minutes to stop a container and the missing nzbget icon. I did set up my docker by watching the @gridrunner videos. And he actually missed/forgot this step about setting appdata not on the user share but on the cache share in his docker tutorial video. Actually, he forgot it during the guide on the machine he was doing the setup on. Later on in the video he shows his own "proper" server, and when you look carefully, you can see in his docker tab that all his docker containers are actually set to the cache share, not the user share. He just forgot it on the "tutorial machine" while making the video. The only thing still not fixed is the blank page instead of the webUI in qBittorrent. But yes, I will write about that in their support thread. Although, I was thinking, maybe I should delete and reinstall the qbittorrent container - maybe it got messed up on install before, because the config path was still pointing to the user share instead of the cache share? But about removing the container - when I choose the Remove option in the context menu, it shows the modal popup with a check-box: also remove image. What does that mean? What image? The whole docker image that holds ALL containers? I will think about upgrading to 6.4, but tbh, it's still in RC, and as I am completely new to unraid, I feel safer running the stable version, at least for now. I need to get used to unraid, get some experience, then I can get into RCs and betas.
  3. @TinkerToyTech Heh, I have all SpaceInvader One videos downloaded and on my hard drive already. He goes by @gridrunner here on the forums. The videos are amazing help, and I have set up my unraid and everything in it by those videos. I am gonna try both rutorrent and qbittorrent. I like qBittorrent's GUI the best of all torrent clients, actually. And I have tried pretty much all of them. The problem is that qBittorrent's webUI is way worse than the regular desktop GUI. Hence I'm trying out rutorrent. But I do not particularly like it, to be honest. It's a lot of small things that I do not like, but that's beside the point here. I really don't want this to get off topic. Because I am still waiting for some answers/tips on how to fix the docker side of things on my unraid server. I mean, I suppose it is pretty much unusable now. Because, like I already said - the last new docker container that I installed (qbittorrent) does not even show the webUI. And stopping any running container now takes many tries and more than 15 minutes. Which is f***ing ridiculous. I cannot continue with my docker setup like this. So I'm waiting for my preclears to finish and then I guess I'm trying to delete docker and restart from scratch. I hope I will not need to redo the whole unraid OS from scratch. But tbh, this actually worries me, not knowing what the hell is wrong and why this crap has happened. I know I'm new to this, but not a complete idiot. And, actually as you suggested, I did everything by watching the videos. I mean my unraid setup is brand new, there's nothing fancy or exotic on it. I haven't even started to actually properly use it. The pool still has no parity, as I still have not finished transferring all my data into unraid. Basically, if this is an accurate "picture" of how unstable and buggy the whole thing is - it really worries me. I had way fewer problems with a shitty windows 8.1 setup before. I mean a NAS should be set it and forget it. In my case - I haven't even finished setting it up... and already all kinds of problems. And in the meantime my family is hounding me, because they wanna watch their stuff - and they can't, because the server is not "working" yet. At this point I'm actually at least looking for an answer to this: is it considered "safe" (as in it will not affect anything else in unraid's working condition) if I use that guide to simply reset the whole docker setup by disabling docker, deleting and recreating the image and then reinstalling the containers? And am I correct in thinking that by doing this, I basically would get a fresh start on anything docker related? As in, if there was a problem with something docker-related, it would get resolved? Basically, what I'm asking - can I think of it as doing a fresh windows reinstall, when using windows? I mean, I'm a windows guy, so this "analogy" would be quite understandable to me...
  4. Just an FYI: I am new to all of it - unraid, docker and linux overall. From the very beginning I was surprised that starting a docker container from the unraid webUI was pretty much instant - like a second, and the webUI refreshes and it's ON. But stopping always took way longer... I have only a couple of docker containers installed. Yesterday it happened for the first time - I tried to Stop a container from the unraid webUI (Docker > container context menu > Stop). My unraid webUI went unresponsive for a very long time - more than 10 minutes probably, until finally it refreshed and the container was stopped. That was nerve-wracking. I really thought my whole unraid UI went dead. Today was the same, but even worse. Whenever I tried to stop the container, the webUI went "spinning" for the longest time, but now it actually refreshed and showed an error modal popup with something like "Execution Error". I am really not sure, because that error always flashed and closed extremely fast - maybe 1/10 of a second at most. And the container which I was trying to stop was still running! I sometimes tried to stop the container 2 or 3 times, until it actually stopped. And finally the nzbget container even lost its image (the icon). What the hell is going on? How do I fix this? Should I reset the whole docker thing? As in delete/recreate the docker image file and reinstall the containers? As per this guide: Is it actually normal for docker to take such a long time to shut down? And in general, to take way longer to stop than to start? It feels counter-intuitive. What is the official/best way to Start/Stop docker containers? I assume the context menu in unraid's webUI? (I put a small timing sketch right below this post to check it from the command line.) Why are there multiple versions of containers in the "Apps Store"? I mean, eg: Sonarr from linuxserver.io, binhex, PhAzE and hurricane. I mean what's the point? Are some better than others (for use in unraid)? I just installed the qBittorrent container (linuxserver/qbittorrent:latest) and the webUI shows a blank page. What the hell? The first thing I would normally do is restart unraid and see if that helps, but I can't and won't be able to for quite some time - I am running the preclear script on 3 drives (6TB). Help, please. I am almost finished migrating all my files from my old windows server into the new unRAID server, and I am in the process of setting up the downloading stuff: nzbget, rutorrent or qbittorrent, sonarr and radarr. I need to set this all up as fast as possible, because my family is getting restless without their shows and movies. Thanks in advance for any help. Edit: I finally managed to get a screenshot of that error window, when trying to stop the rutorrent container (still running, after 3 tries). The error is "very informative"
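To separate a Docker problem from a webUI problem, I'm going to time the stop from the command line too. As far as I understand it, docker stop sends SIGTERM and then SIGKILL after a timeout (10 seconds by default), so a healthy container should be gone in seconds, not minutes. A little sketch I'll use (the container name is just an example):

```python
#!/usr/bin/env python3
# Hypothetical timing check: stop a container from the command line with an
# explicit timeout and measure how long Docker itself takes, to see whether
# the delay is in Docker or in the unraid webUI. Container name is an example.
import subprocess
import time

name = "nzbget"
start = time.time()
# `docker stop -t 30` sends SIGTERM, then SIGKILL after 30 seconds
subprocess.run(["docker", "stop", "-t", "30", name], check=True)
print(f"'{name}' stopped in {time.time() - start:.1f}s")
```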
  5. @itimpi Thank you for the answers. It is rather strange that such an important part as the mover script - which, imho, is the most important aspect that makes the cache drive useful - does not have an ability to ignore folders/shares. I mean, you are forced to use user shares either as cache=No or cache=Only, with a "dumb" mover script like that. I am new to unraid, so maybe I haven't discovered a better way to achieve this. But as of now - it seems anyone using unraid and running torrents/usenet on it would be in the same situation as me. What is more baffling is that the mover cannot be easily disabled via a simple setting in the webUI. I seriously do not understand the logic of this decision.
  6. I want to have the rutorrent and nzbget download folders be on the cache drive, because it's an SSD. Now nzbget - this is not a problem, I can set it to cache=Only. Or I could even probably use an additional SSD with the Unassigned Devices plugin. I was just now in the process of setting up nzbget, sonarr and everything else, and I realized that actually all of them - nzbget, rutorrent, sonarr, radarr and anything else of this kind - need to be pointed to the exact same user share (eg: /mnt/user/downloads) for them to work perfectly. That means that even nzbget downloads will be on a user share with the setting Use cache disk: Yes. And that is very inconvenient. The rutorrent folder - I need it to be an actual user share with the setting Use cache disk: Yes. This way rutorrent will download everything onto the SSD cache drive, and then I can move those files manually to the pool and keep seeding. I use user shares and split levels to keep my files in a particular order on the data drives. For example - I have movies and tv shows on separate "drive groups", by using separate user shares and the Included disk(s) option. Also, I use split levels to keep all of the files belonging to a particular tv show together on the same disk. The user share used by rutorrent is obviously using all the drives in the pool - because I want to seed all sorts of files, from any of the data drives. In this scenario - if the mover moves the files from the rutorrent folder - it has no idea if it's a movie, or an episode, etc, so it will pick any drive from the pool. And that is no help at all, because I will need to actually find those files and move them to a proper disk myself, only now I need to go hunting for them among 28 drives, instead of them all being on a single cache drive. Basically, it's simple torrenting stuff. So can the mover somehow be set to ignore a particular user share's files on the cache drive, and leave them alone? If not, can the mover script be replaced by a "smarter" mover by using a plugin? Finally, can the mover be disabled completely, if I want to manage files on the cache drive and the cache-enabled user shares manually? I tried to search for info on this on the forums, but could not find anything recent. Judging from the old discussions, the answers to all 3 would be NO. Which is hard to believe. I hope I misunderstood something or something has changed over the last few years. (I sketched below the kind of "smarter mover" I have in mind.)
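For reference, this is roughly the "smarter mover" I have in mind - move everything from the cache to the array overnight, but skip the shares I list. The /mnt/cache and /mnt/user0 paths and the share names are my assumptions about how unraid lays things out, so this is only a sketch, not something I've actually run:

```python
#!/usr/bin/env python3
# Hypothetical "smarter mover" sketch (run from cron overnight): move files
# from the cache drive to the array, but leave the listed shares alone so
# seeding torrents stay on the SSD. Paths/share names are assumptions.
import shutil
from pathlib import Path

CACHE = Path("/mnt/cache")
ARRAY = Path("/mnt/user0")      # user shares excluding the cache drive (assumed)
IGNORE = {"rutorrent", "appdata", "system"}  # example shares to skip

for share in CACHE.iterdir():
    if not share.is_dir() or share.name in IGNORE:
        continue
    for src in list(share.rglob("*")):
        if src.is_file():
            dest = ARRAY / src.relative_to(CACHE)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(src), str(dest))
```

It leaves empty directories behind on the cache and has no error handling - it's just to show the idea of an ignore list.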
  7. Thanks for the replies, guys. @ken-ji Thanks for the info. Not a linux guy, me, so sorry for maybe misunderstanding some things. @bjp999 Very good points. And I actually agree with most of them. Thanks for actually explaining the math and percentages. It is really quite hard to do that as a new unraid user. Actually, the second parity and how the hell it works and helps to recover - that was exactly the point I was and still am not 100% sure about. I actually assumed that the second parity would provide the info on which hard drive was "bad". I guess I was wrong. But then I seriously have no clue what the hell that second parity does. Of course, this problem could be sorted out by implementing a proper checksum system. And talking about checksums - I have read someplace that one of the check-summing scripts/plugins has the ability to actually help achieve this, by providing the information on which file has been corrupted during the repair... is that true? I tried to look through my notes, but can't find the link where I read this. The extra 0.26% protection for the price of an additional drive sounds really silly, when you put it this way. But I have had 2 drives die on me within a 1-2 day period at least 3 times over the years. So even from my own experience - this is not that uncommon, especially with a shitload of drives. And I have "some". What worries me more is that if I can't successfully recover from a failure 100% - that means that some file(s) got broken. If unraid has no idea what file(s) or on what drive(s), and it sounds like that's the case even with double parity - that means my paranoia and OCD will kill me. Or am I misunderstanding how double parity and recovery work again? In this case, I assume, I could at least probably use the hashes made by the Dynamix File Integrity plugin to find the broken files? As for multiple pools - I stand by my opinion - it is essential, imho. Multiple pools would enable using smaller "protection-groups". For that I would gladly sacrifice more drives. Let's say 1 parity for every 16 or even 10 drives. Multiple pools would greatly speed up parity checks, as I understand. That's just off the top of my head at the moment. As for pooling the pools - why not use the same system? You have multiple separate user shares now. You can set up every share individually - set the included/excluded drives. Nothing has to change when it comes to user shares. The only thing that changes is setting up separate parity-pools. I mean, you could set one parity to protect drives 1-16, another would protect 17-32, and so on... That's it. I assume the parity drives and the whole protection scheme have nothing to do with the user shares implementation in the current system anyway... The only changes would be if they were to implement multiple cache pools, which would be awesome. One could be protected (RAID1), another could be simple JBOD... Would be awesome. And the mover script would need to be made a little bit smarter and with options. Also, I'm not whining about moving to Freenas. I know you won't like me saying it, but it is a fact - unraid and freenas are in different leagues. It's not unraid's fault or an accomplishment of Freenas, as it were. It's ZFS - it simply has no equal. Overall, unraid has better usability when it comes to anything other than the FS - docker, VMs - everything is easier, imho. So don't go biting my head off, now. If/when I want to move to freenas, I'll simply do it, no whining required.
  8. Thank you for the responses, guys. @bjp999 Me thinking that profit is the reason for not having multiple pools is simply speculation. But what makes me think this way is that I have not seen any good explanation for the limit on the drives, especially on the most expensive Pro version. Actually, unraid has its pricing tied to exactly that - the drive limit. If profits are not the reason - remove the limit on the Pro licence. What's with this 30 drive limit? Why 30, exactly? If unraid can successfully protect 5 or 10 or 28 drives with the same double parity, why not more? What's the difference, anyway, 28 or 48? You are probably hitting I/O bandwidth limits on parity checks anyway. The $129 is not exactly cheap. Especially if you need to buy more than one licence. And even though I have been using unraid only for a couple of weeks, I have been checking the forums for more than 10 years, probably. And I know there are tons of people who have bought multiple licences and run multiple unraid boxes. There was even a 2 licence bundle before, iirc, no? I'm pretty sure there was, maybe a long time ago. Why aren't the forums a good place to talk about features and feedback? Methinks it's a good place to toss some ideas around, especially before contacting the company with half-baked requests. @BRiT I know it would take some work. IIRC, I have seen emhttpd blamed for many things, as being the reason it's too hard to change or implement this or that... I may be wrong, but methinks the biggest reason for unraid being hard to change and evolve is that it's paid software, closed source, and using a custom distro, as I understand. @Lev @dgtlman Actually, I am not willing to pay for multiple licences. I especially would not be happy to pay for multiple licences to be used on a single machine, in whichever way it would work - licence per pool, or whatever. Like I said, imho, $129 is expensive enough. Actually, for that money I would like to be able to use that same licence on multiple machines. For personal use, that is; for businesses the licensing could be different. But that's the point - if Lime-tech would update unraid to bring it out of the home-hobbyist level, it would be more attractive for business use and the licensing and money would be different. I know I may get booed for saying this, but for business use, imho, it would be silly to use unraid when there is freenas out there, which is way more secure, faster and most of all - free. But for home use, yes, unraid is acceptable, if you have a small server with not a lot of drives. Anyway, my previous post was just frustration. I am in the middle of migrating my windows server to unraid (100TB+ of data). I chose unraid because I already had a license, and it was cheaper (for now) to buy 3 new 10TB drives to replace 3 smaller 3TB drives (which are in perfect condition, btw), just to be able to "fit" my data into that 28 data drive limit on unraid. So, when I saw the exact topic on the 30 drive limit, and then read the nonsensical answers, like - more drives, more problems, and maybe you don't need that many drives... My reaction simply was - what the hell? That's it. The problem is basically that I chose unraid even though I already have a server that I could connect 80 drives to easily, today. I chose unraid because it has a pretty simple webUI, which is a must for me - because I have no experience with linux whatsoever.
If I had, I would probably do this: The Perfect Media Server 2017. And that's the point I was trying to make before. I may be wrong, being a linux newb, but it seems to me that all the software needed to make a perfect (or way better) unraid is out there already, and all of it is free and open source. Like I said, all you need is:
  • some good linux distro
  • btrfs for cache pool(s)
  • snapraid for parity
  • mergerfs for easy and flexible pooling
  • a smarter mover script using cron, with some options for multiple cache and data pools and the ability to ignore folders, etc
  • docker - no problem, and control with something like Portainer is very nice
  • KVM - no problem
  • write some webUI to manage all this, if you want, but is that necessary? You could easily run this on some linux distro with a Desktop.
That's it, isn't it? I am not a programmer, just a hobbyist. And I've got no linux experience. So that's a no-go for me, for now. But that solution sounds pretty doable to me. And awesome. And free. I bet it's gonna be made by someone, and pretty soon...
  9. This is a sore subject for me too, actually. The fact that unraid has a limit on drives is the most annoying thing about it. The main purpose of unraid is to be a storage solution. I know that with VM and docker support it has evolved into being used not only as a storage solution, but that does not change its main purpose. And most importantly - unraid is not free, but rather expensive. And for storage solution software to have a limit on drives in itself (as in - the limit is set in the software itself, not by hardware limitations) is, IMHO, the most ridiculous thing about unraid. And to put it bluntly - it's like a TV manufacturer making and selling TVs that only show 30 channels and no more. And if you happen to have a cable subscription with a hundred channels, you need to buy 4 separate TVs to see all the channels. IMHO, unraid should have supported multiple pools years ago. Multiple pools and no drive limit. And I would actually love to see a proper answer to this question - why doesn't it? Because honestly, the only reason I can see is the money, as in people buying more than one license, hence more money for lime-tech. Multiple pools would solve the problem of drive limits, the bandwidth limit problem with a lot of drives when doing parity builds/checks, and the problem of actually having 28 drives protected with only 2 parity drives. And I'm sorry guys, but your answers are not very helpful. What would be the difference between 28 and 38 or 48 drives actually? I mean, 28 is already too much with only 2 parity. I mean, more than 28 with dual parity is way better than 20 with single parity, as was the case for years before. And yes, he probably needs that many, seeing that he bought a case that supports 36 drives... "More drives means more problems" is the same as saying running unraid is more problems than not running unraid at all. I agree, as it stands now, unraid is only a hobbyist solution at best, and why lime-tech is OK with that I have no idea. I think having a lot of drives even in a home server has already become common-place enough to no longer be an enterprise-only situation. I think it's been years already, with many people having huge home media servers (storage-wise). I am a 100% hobbyist and I have 200TB+, and it's not as hard or expensive as it was before. It's actually pretty easy and cheap nowadays. The cost of having more than 12 drives in one system is actually very low. Actually, nowadays you can easily connect more than a hundred drives to a single machine, by using HBA(s) and used external SAS expander boxes from ebay. And it is many times cheaper than building multiple machines plus buying multiple unraid Pro licences to run 30 drives at a time. That is the situation today, actually. I bought an unraid licence as an impulse buy almost 2 years ago and haven't used it until now, mostly for a single reason - the drive limit was too low. I can connect 80 drives to my single home server with my current setup. And 30 is way less than 80. The only reason I actually picked unraid over freenas for now is because I already had a Pro license. Actually it would be interesting to calculate what would be cheaper to run if you have 50+ or more drives - freenas or unraid. This is just wishful thinking, but IMHO, what lime-tech should do is:
  • Keep the real time protection using a cache pool. I'm not sure if btrfs is stable enough, though, but it has checksums, and that's a must. Maybe even add an ability to have multiple cache pools with separate raid options.
  • Implement multiple pools. No drive limit. Period.
  • Ditch this proprietary real time 2 parity drives nonsense. Use Snapraid on those multiple data drive pools.
  • Keep the convenience of user shares, of course (the pooling).
Result: way better (if not the best) protection at whatever level the user wants.
  • You still get the real time protection using cache pool(s). The mover moves the files from cache during the night hours and updates the Snapraid parity. Copy to the pool(s), update parity, delete from cache... easy.
  • You still have the convenience of adding/removing any type of drives whenever you want.
  • You get the everyday usage speed with cache pools.
  • You avoid I/O bottlenecks during parity checks with smaller Snapraid "protection-pools".
  • Unlimited drives, as it actually should be in a storage software solution.
If/when someone makes a user-friendly solution as described above - unraid will lose customers, imho. Because that one is the best solution for home servers. For anyone that would want more - there's Freenas.
  10. I am a total linux newb - no experience with linux at all. Also, it's been only about 3 weeks since I started using unRAID. I am in the process of migrating files from my old windows server to unRAID atm. 1) I have installed a Linux Mint vm on unraid, and I always get this warning after booting up the vm: Running in software rendering mode. I guess it's got a problem with GPU drivers? So how do I install the virtual gpu drivers? Or is there something else that needs to be done? As it is now, the vm is practically unusable. I have attached the screenshots of the warning and my Linux Mint vm settings in unRAID. I would really like to be able to pass through the gpu to a LibreELEC vm and use it as a Kodi player. Down below are my IOMMU details. It seems my GPU is not alone in its group. These are my PCI Devices and IOMMU Groups:
IOMMU group 0
    [8086:0158] 00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v2/Ivy Bridge DRAM Controller (rev 09)
IOMMU group 1
    [8086:0151] 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port (rev 09)
    [1000:0064] 01:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2116 PCI-Express Fusion-MPT SAS-2 [Meteor] (rev 02)
IOMMU group 2
    [8086:015d] 00:06.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port (rev 09)
    [10de:104a] 02:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 610] (rev a1)
    [10de:0e08] 02:00.1 Audio device: NVIDIA Corporation GF119 HDMI Audio Controller (rev a1)
IOMMU group 3
    [8086:1502] 00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (rev 05)
IOMMU group 4
    [8086:1c2d] 00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
IOMMU group 5
    [8086:1c10] 00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
IOMMU group 6
    [8086:1c18] 00:1c.4 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 (rev b5)
IOMMU group 7
    [8086:1c26] 00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
IOMMU group 8
    [8086:244e] 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5)
    [102b:0532] 05:03.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200eW WPCM450 (rev 0a)
IOMMU group 9
    [8086:1c54] 00:1f.0 ISA bridge: Intel Corporation C204 Chipset Family LPC Controller (rev 05)
    [8086:1c02] 00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family SATA AHCI Controller (rev 05)
    [8086:1c22] 00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05)
IOMMU group 10
    [1b21:1042] 03:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
IOMMU group 11
    [8086:10d3] 04:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
2) Would changing the PCIe slot on the mobo that the GPU is plugged into have any effect on the IOMMU grouping? I think that's a USB3 card in group 10, and it's alone. Would putting the gpu in that PCIe slot solve my issue? (There's a small script below this post that I'm going to use to re-check the grouping after moving the card.) 3) If not, should I use the Enable PCIe ACS Override option? I am worried, because it says right there in the info: Warning: Use of this setting could cause possible data corruption with certain hardware configurations. Thanks in advance for any help.
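This is the little script I mean - it just reads /sys/kernel/iommu_groups, which should be the same information the unRAID System Devices page shows, so I can quickly re-check the grouping after moving the card between slots. Just a sketch I plan to run from the unRAID terminal:

```python
#!/usr/bin/env python3
# Quick sketch: print each IOMMU group and the PCI devices in it, straight
# from sysfs. Useful to re-check grouping after moving a card to another slot.
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    devices = sorted(d.name for d in (group / "devices").iterdir())
    print(f"IOMMU group {group.name}: {', '.join(devices)}")
```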
  11. @BobPhoenix Sorry, but mount points are a completely different thing, as I understand it. After some googling, I think that in linux symlinks are the same as junctions or softlinks (which are better) on windows. Hardlinks are pretty much the same everywhere, I gather. Now the questions: Can I make symlinks using krusader (in a docker container)? I am a complete newb when it comes to unraid and linux overall... so sorry if it's a silly question. Can I make symlinks only on the same drive (as hardlinks only work on the same drive), or can I make symlinks in the user shares or across user shares? Can I make hardlinks using krusader? Basically, both symlinks and hardlinks are indispensable when it comes to torrents and seeding. As I mentioned, I'm new to unraid and linux - basically a windows user through and through - so my command line and ssh skills are abysmal... And krusader is a pretty cool and convenient way to manage files via a GUI. (If it has to be done from a script instead of the GUI, something like the sketch below this post is what I have in mind.)
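In case Krusader can't do it, this is the kind of thing I'd try from a script instead. The paths are made-up examples; the only rule I'm aware of is that hardlinks must stay on the same filesystem/disk, while symlinks can point anywhere, even across shares:

```python
#!/usr/bin/env python3
# Sketch of making links from a script (e.g. via the unRAID terminal or a
# container's console). Paths below are hypothetical examples.
import os

src  = "/mnt/cache/downloads/Some.Show.S01E01.mkv"  # seeding copy on the SSD
hard = "/mnt/cache/tv/Some.Show.S01E01.mkv"         # hardlink target: SAME filesystem/disk
soft = "/mnt/user/tv/Some.Show.S01E01.mkv"          # symlink target: can be anywhere

os.link(src, hard)      # hardlink - both names point at the same data on disk
os.symlink(src, soft)   # symlink - just a pointer to the original path

print(os.path.samefile(src, hard))  # True: same inode
print(os.readlink(soft))            # shows where the symlink points
```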
  12. I am (was) using hardlinks and junctions extensively on my windows server. Now that I'm migrating to unRAID, the question is: does unRAID support hardlinks and junctions? If I understand correctly, hardlinks are properly supported on the "same disk". But junctions - I can't find anything about them. And if they are supported - what is the easiest way to create hardlinks and junctions?
  13. Cool, ordered 3 (just in case 1 is DOA). @johnnie.black Will these be easy to set up and use on unRAID and windows? Any guides? I have no experience with cards like these...
  14. Where did you buy these 8TB Reds so cheap? I'll buy some on the spot, if they're still available at this price. Do you mean these? MNPA19-XTR 10GB MELLANOX CONNECTX-2 PCIe X8 10Gbe SFP+ NETWORK CARD W/CABLE | eBay - these are suspiciously cheap, and a cable is included... Could I just direct-connect my 2 servers with 2 of these cards? I mean without a 10gig switch, because I don't have one...
  15. Phew... I was beginning to worry. Thank you guys for making this clear. So, even if you already have parity - in an emergency, you can simply "disband" the array, take the drive(s) with data and re-create the array and parity from scratch... Am I understanding this correctly? DrivePool is nothing special. The drives are simple NTFS drives with a hidden folder in the root, where all the files and folders get placed. Basically, it is very similar to how user and disk shares operate in unraid. With DrivePool you can access your files pooled via the pool drive (which gets created and presented to windows as a regular drive), same as user shares on unraid. Or you can access the drives directly via their mount points (eg: I have all the regular letter mount points removed from all my pooled drives, and instead I have all of them mounted to folders in D:\Drives\...), same as disk shares in unraid. Basically - I can remove DrivePool completely, and still easily access all my files on the drives without it. It's just that all of them will be in a hidden folder in the root of each drive. Simply show hidden - move everything out to the root - done - a regular NTFS drive. So there shouldn't be any kind of problems from DrivePool. If I pass through the HBA - all the drives should show up in windows as regular drives and DrivePool simply picks them up - no problem. Actually, DrivePool would be quite good and adequate, if not for 2 stupid mistakes: The stupidest thing ever - DrivePool scatters the files all over the drives by default - with pretty much no real way to keep them together, as in unraid. And the dev is ignoring any pleas for implementing an option to keep the files in a folder together. The numerous explanations for this that I have read over the years are ridiculous. The second problem is that the only type of protection is duplication. Which is stupid for media. And because of reason #1 - you effectively can't even use third party parity solutions like, eg, snapraid. Because you never control the file placement - hence you basically will be out of sync all the time. There's a shitload of other problems with DrivePool, and pretty bad ones too, but that is irrelevant. Exactly the reason I want to run windows in a VM on unraid. I'm pretty sure I could get 800MB/s per HBA or even more, maybe. Actually, with 4 expanders I can put 64 drives online just using them (16 drives per expander). I can connect all 4 expanders to a single HBA. Or, as I was gonna do - 2 HBAs with 2 expanders each, 32 drives per HBA. As unraid only supports a 30 drive pool, I'm covered. I even have 2 extra slots for cache drives later on. Theoretically (as I'm new to unraid, hence no practical experience) there should not be a problem:
  • I clear the drives in windows, remove them from the DrivePool
  • Shutdown the windows VM and power down the server (unraid) itself
  • Remove the drives from the expander connected to the HBA that is passed through to the windows VM
  • Add those drives to the expander connected to the other HBA, which is used by unraid
  • Power on the server (unraid)
  • The new drives are ready to be precleared
  • Power on the windows VM - those removed drives are not even missed...
  • And continue moving files onto the newly precleared drives on unraid...
Again - removing and adding drives in my case is simply sliding the HDD trays out of one expander and sliding them into another. No potential loose connection/cable problems whatsoever.
Yes, that is a pickle with DrivePool. Anyway, DrivePool has a balancer (the whole of DrivePool works on "balancers") which fills the drives one by one. That is one of the "answers" they give when asked how you can keep files together. This balancer dumbly fills the drives one by one in a row, instead of scattering files. Which may keep the files together while you are initially filling the drives. But later on - during normal usage - deleting files, adding new ones - the files still dumbly go to whatever drive needs to be filled next. Hence - no real keeping of files together, no matter how much they want people to think that it is. In my situation - I simply move 3 drives' worth of files out of the DrivePool, and yes, they come from all over the place - so no drives get nicely emptied. But then I use that balancer in the background to fill some drives back up, in the process almost clearing the others (the next batch to be removed). At the end, I just use another balancer to actually clear the drives completely. And that's the whole convoluted mess DrivePool has gotten me into. Oh, and btw - that singular balancer which fills the drives one by one, by disabling the most important default behavior of DrivePool (the scattering of the files all over the place) - it is not even available by default. You need to install it separately, and it is not recommended. TBH, the whole DrivePool business would be funny, if it wasn't so sad... Wow! That sounds extremely interesting. Is this what you're talking about: [Plug-In] unBALANCE? In the description it says: Gather (Coming Soon ... ) Consolidate data from a user share into a single disk. So it has Scatter implemented, but Gather - no? That would be very ironic. I hope not, gonna read through it after posting this... I was looking for something - anything - to do this kind of thing. I even started writing my own program, but dropped it, because I figured it would not save me more time during the migration than I would waste writing it. Please, from your experience with unraid, what would you recommend? I am thinking of only 3 options:
  • WD Red 10TB - 5400 rpm, hence running cooler. Exactly the same as the data drives, and probably my favorite option
  • WD Red Pro 10TB - 7200 rpm, this would be the most expensive option, but probably the most reliable?
  • WD Gold 10TB - 7200 rpm, more reliable than regular Reds?
  16. Running preclear on the drives before adding them to the pool makes adding them way faster - almost instant. If you have not precleared the drives, then when you add them to the pool they get cleared at that point, and it takes hours. Or am I wrong? If you want to preclear faster (using the preclear plugin), you can disable the pre and post reads, iirc. Also, methinks it's a really good practice to run the preclear plugin on drives before adding them to the pool. It tests the drives.
  17. Thank you guys very much for the answers, but I think you misunderstood my situation and questions (mainly the most important - the 1st one). I was really trying to be precise, but probably I wasn't. Let me describe it in more detail: I have no worries about my hardware. Pretty much all of it is enterprise grade and I bought all of it brand new (except the SAS expander cases). The Supermicro mobo and LSI HBAs - these are very reliable and should be supported very well, AFAIK. The SAS expander cases, even though I bought them old and used on ebay - they are also enterprise grade, not some Norco cases. Actually I bought them to replace the Norco case I was using before, and these are way better - only half depth and way better airflow and cooling for the drives. All my drives are in these expanders. And these expanders are connected to the HBA via a single external mini-SAS cable (SFF-8088 to SFF-8088). So basically, whenever I need to add or remove a drive - there is no fiddling with any wires or connections. I do not need to open any cases - neither server nor expanders - I simply remove/insert the tray in the expander and that's it. Also, I never do or trust hot-swapping - I always shut down the server and the particular expander to which I am adding or from which I am removing drive(s). I'm saying this just to be clear - I am not expecting any "loose or bad connection" troubles during the migration process. And probably not during regular use down the line, either. Currently I have 2 servers. Migration source and destination. The source - server #1 - is a windows 8.1 machine. This is my current home media server (it runs on the hardware I described). I guess you could call it a production server (as in, it is not a test machine just to play around with - it has all my media on it). Yes, it has 130TB of data, but 98%+ of it is just media (movies and TV shows). So I do not have backups, but I do not need backups. I am not rich enough to have backups of media files. They are pretty easy to replace. The 2% - the important files - have backups, of course. The destination - server #2 - is now running the unRAID 6.3.5 Trial. The singular purpose of this server is to be the destination for files from the windows server (#1) during the migration process. For this reason, this unRAID server does NOT have parity, nor any parity or cache drives. And it will NEVER have parity or cache drives, ever. After I have migrated/moved all my files from server #1 (windows) to this server #2 (unraid), I will discard server #2. And then I will install a fresh unRAID server on server #1 (windows be gone). Basically: I will move the files from server #1 to server #2 until I clear 3 drives in server #1. I will then remove those 3 empty drives from server #1 and add them to server #2. I will run the preclear plugin on those 3 drives. I use the preclear plugin for 2 reasons: 1) it tests the drives and 2) when drives are precleared, adding them to the pool is very fast. Then, after they have been precleared, I will add those 3 drives to the existing unRAID pool (again - the pool has no parity). And repeat this loop with the next 3 drives, until all the NTFS drives from windows are in unRAID. Again - when I move files to unRAID (server #2) - I must be moving files from a windows OS. I cannot simply remove a couple of drives from my windows server, attach them to the unraid server and mount them with something like Unassigned Devices.
Like I described before - all my files have been scattered across all the drives by the stupid Stablebit DrivePool. So now that I want all my files back together in a proper file structure on unRAID - I have to be moving them from the windows OS running DrivePool. This and only this enables me to move the folders with the files inside them together. Basically - when I move a folder from DrivePool to unRAID, DrivePool picks up all the files that are scattered around on multiple drives and combines them. I hope I am making myself clear. And yes - I know unraid supports only 28 data drives in the pool. I will have only 28 data drives. I have described my future unraid pool in my first post. Actually, fingers crossed for unraid supporting multiple pools in the future. I will be using dual parity, of course. So, the questions in more detail: Can I make a brand new pool in a brand new, fresh install of unRAID with drives that already have data on them? Once again - brand new pool - still no parity. Can I add a new drive with data already on it to the existing pool on unRAID? That existing pool does not have parity yet. I repeat - no parity or parity drives. Basically - when unRAID adds a new drive to the pool - can that drive have data on it or not? I mean - if you add a drive to the unraid pool, does unraid always clear and re-format the drive? I hope you can, because otherwise what happens if you want to simply reinstall unraid from scratch... For example - the unraid flash drive dies... I hope you can use the existing data drives from the last unraid pool to create a new one, without moving/copying the data over. Because otherwise it's silly... As for converting my current windows server into a VM running on unraid: I was reading the forums and watching the awesome unraid videos from Spaceinvader One - YouTube. So, basically, I should be able to do it. I was only wondering if there was any trouble I don't know about, being an unraid newbie, when actually moving files from a windows VM into the unraid pool. If there isn't, it should be good, right? Also, these things you guys said worry me a little. I cherry-picked only the sentences in question: This sounds like any drive added to the unraid pool always loses any data already present on it. Even a data drive from another unraid server? Seriously? I hope not. Maybe I'm wrong, but I thought that during a parity check all the drives need to be read? I understand how the size of the parity drive (and the largest data drive(s) in the pool, probably) would dictate how long a parity check takes. But I'm thinking this should be true only if you have a small enough number of drives that there is no I/O bottleneck. Hence my question - with 30 total drives - there will definitely be an I/O bottleneck... Not sure how big, of course. But otherwise, isn't it the case that parity check time depends on your I/O bottleneck and the pool size (both in drive count and drive size)? I assume that is what bjp999 means by "not constraining the bandwidth", yes? I was basically thinking that parity drives do most of the work - they get written to whenever anything gets written to any of the data drives... Hence, I assume they need to be more enduring and reliable. And, of course, fast - at least not slower than any data drives, right?
  18. I have finally decided to migrate my home server from windows 8.1 to unRAID. I have the Pro license, which I bought about a year ago, but never had the time to use it. The reason is - I have 130TB+ on 35 drives on my server now and I'm using Stablebit DrivePool. The biggest problem is DrivePool - it has scattered all my files across all the drives. It did that intentionally - that's just how it operates. So for example: Star Trek Enterprise season 1 has 26 episodes - and each and every one of those episode files is on a different drive, 26 drives in total. How stupid is that? And that's how pretty much all 130TB of files are now. So the migration process is gonna be long and tedious. Here is the home server's hardware:
  • Intel Xeon E3-1230 v2 3.3GHz BX80637E31230V2
  • Supermicro MBD-X9SCM-F-O
  • Kingston 32GB (4x8) DDR3 ECC Unbuffered 1600 KVR16E11K4/32
  • LSI 9201-16e PCIe 2.0 x8 SATA/SAS HBA LSI00276 - to this HBA I have connected 3 of these expanders:
  • SGI Rackable SE3016 SATA SAS Expander 16 Hard Drive Bay
and in these expanders I have 35 drives of various sizes:
  • 7x WD Green 2TB
  • 14x WD Red 3TB
  • 14x WD Red 6TB
Last week I bought 3 new WD Red 10TB drives. I had an old spare i7 pc, so I put an unRAID 6.3.5 Trial on it. I precleared those 3 new 10TB drives with the preclear plugin. Then I added all 3 drives to an unRAID pool, without any parity or cache drives - just a simple 3 drive pool. And then I started moving the data off my windows server onto these drives. I move the files using disk shares for now, not user shares (just to keep files together). I know unRAID can do that using user shares too, but it is simpler for now. Later on, when all the data has been moved, I will properly sort my files and will figure out user shares and their split levels, etc. Questions: I was so very much pissed off by DrivePool at this point, and was in so much of a hurry to start the migration, that I did not think to look this up beforehand. Will I be able to simply take these drives, already filled with data, and put them in my final unRAID server and add them to the new pool? I mean, these drives have been precleared with the unRAID plugin, have been formatted by unRAID, and I will be adding them to a pool which has no parity... Basically, what I wanna do is take the drives from my windows server in groups of 2 or 3, preclear them, add them to the unRAID pool and move the data off the server... repeat... Finally, when all the data is migrated to unRAID, I will make a new unRAID setup from scratch with my Pro license, create a new pool, add parity and cache drives and only then calculate parity... So is this OK? Can unRAID use drives filled with data from another unRAID system (again - no parity for now)? Copying 130TB of data over a 1gig network (I'm getting ~80MB/s atm) is gonna take forever. After doing some thinking, I was wondering - could I put unRAID on my server machine right now and convert my existing windows 8.1 into a virtual machine running on unRAID? All the drives that are needed by the windows VM would be on the same HBA. Could I pass through the whole HBA to the windows VM with all the drives, so DrivePool in windows could function without problems? I need DrivePool running and pooling the drives on windows, because the files are scattered all over the place, and without pooling I could not easily move the files to unRAID in a proper file structure... I have a 2nd identical LSI 9201-16e HBA and an identical 4th expander box. I would use these to run the drives for the unRAID pool. These would be the destination drives.
Would that be OK? If there are any problems using 2 identical HBAs, I have 2 LSI 9210-8i (IT mode) cards. I could use 1 or both of those. Depends on the answers to 2 to 4. But if I can do this windows-in-a-VM-on-unRAID migration - would I run into serious problems down the line? Because I will be removing drives from the windows HBA and adding them to the unRAID HBA, 2 or 3 drives at a time... This is gonna be my first unRAID and already it is gonna be with max drives - 28 data and dual parity. Not sure about cache yet (single or pool). I am wondering, how long are the parity checks gonna be? I am gonna have only 3 10TB data drives in it to start, but methinks in 2 to 3 years all 28 data drives are gonna become 10TB, I'm pretty sure. How long are the parity checks gonna be then? I mean, I've read that it's recommended to do a monthly parity check... What would be better as parity drives - the same WD Red 10TB, or should I get WD Gold 10TB? As I understand it, parity drives are the most important, hardest working drives and should be as fast as possible, yes? The whole reason to do this vm thing is to avoid the 1gig LAN bottleneck. I'm not sure, but I think I could get at least 400MB/s transferring between HBAs, or maybe even 800MB/s. Never tried it before, so we'll see. Thanks in advance for any answers and help.
  19. @bjp999 @johnnie.black Thanks for the info, guys. And I will look into those Seagates, if their reliability has gotten better. Because in my own experience - Seagates were the worst; actually more than half of mine are dead already. I really can't remember exactly, but I think I have 10+ 1.5TB and 10+ 3TB dead Seagate drives. At the same time - I have like ~30 old as hell 500GB and 750GB Samsung drives - most of them still work perfectly. My go-to testing drives. Maybe 3-5 dead out of ~30, and some I have stopped using just because SMART gave warnings about reallocated sectors.
  20. I have an old i7 rig (from about 2010, IIRC) I could use. How would that cope with dual parity? Unraid's attraction was that it always ran smoothly on old and not-so-powerful hardware. Did that change with dual parity? I mean, forget docker and VMs - just using dual parity...?
  21. That is the most reasonable way of doing it, imho. I could buy another license and put up a 2nd unraid, but it's more convenient to have all the stuff in one place. Although, the 2nd one could be a backup, that's always online also. Hmm, gonna think about it some more...
  22. @bjp999 All the drives in that list I already have, and then some. I thought about getting WD Red 8TB drives before my last purchase, but got those 10TB WD Reds. The problem lies with unRAID having a 28 drive limit in the array. Without buying some 10TB drives, I would not have been able to even fit all my data into one unRAID to begin with, as my server now has 36 drives connected. Not to mention that I have a bunch of offline drives full of stuff that just sit in a drawer. Would love to get that data online too, but will see how it goes. I have 40+ WD Reds, the oldest of them being 3TB drives, maybe 4 years old? Only one WD Red "died" on me. But then a couple of months later I connected it to a windows machine - and it's OK. Go figure. On the other hand, I have only 2 HGST 4TB drives, about the same age (~4 years). One just gave me reallocation warnings, like 3 days ago. Moved all the data off it and will remove it from the server on the next reboot. As for the Seagate Archive drives, are you talking about the shingled ones? Nah... Maybe as backup, but I have plenty for backup as it is. I mean for backing up just the important bits. No point in backing up all the media. At the end of the day, after having so many drives die on me over the years, I just feel kinda good about WD Reds, at least for now. /me knocks on wood. I'm sure you know what I mean. By now I am willing to pay a little extra for a drive, if I am reasonably sure it will be more reliable. Of course, nowadays any drive is a lottery. Also, fewer (bigger) drives - fewer points of failure. Right? @tdallen Yes and yes. Many "batches" of drives bought at the same time. I had 2 drives fail within a ~24-36 hour period twice over the last 5 years. That's why dual parity was the thing that brought me back to trying unraid. I don't even remember when I bought the Pro license. I know I tried unraid for short periods of time like 2-3 times before. But never switched to it. But dual parity, plus Docker, plus VMs with hardware passthrough - now we are talking a completely different unraid. Although, the migration is gonna be a f***ing nightmare. 100TB+ of data in a StableBit DrivePool. That means all the files scattered all over the drives. The main reason why I am not on unraid already. I'm getting depressed even thinking about it
  23. @Frank1940 Yep, exactly - a couple of TBs of data needs to be backed up, but that's it. All the other stuff would take a very long time to get back, but it's doable. And then there's a grey area. For example - about 5 years ago I lost a 2TB drive filled with TV Shows. That one was "special" in that all the shows were "short-lived", as in cancelled after 1 season or even after a couple of episodes. To this day, more than half of the stuff from that drive is nowhere to be found. Some of it I found, but in way worse quality than I had before. So that one hurt
  24. I understand that the Dynamix File Integrity plugin does not fix the data. I meant that I will need it to know which files have been "badly" recovered, in the case where more drives than the parity can cover give read errors. So I could know which files I need to replace, which is easy, as most of them are media. As for everyone everywhere constantly repeating about having backups... With all due respect... I bought unraid for the simple reason that it wastes the least amount of drives for reasonable protection, and if shit hits the fan - I'm still left with most of my data NOT gone, unlike with raid or freenas. And most importantly - if I had the money to waste on drives just for having a backup of 100TB+ of media files, I would not be here. I would be sitting on my yacht and my personal IT guy would take care of all this. @bjp999 what are the chances of there being two of you? Edit: Oh, I forgot to mention - all of my data drives are/will be WD Reds:
  • 5x WD Red 10TB
  • 15x WD Red 6TB
  • 8x WD Red 3TB
For parity I was thinking maybe WD Gold 10TB? Or would Reds be OK? On the very first SMART warning, I will be replacing the drive with a WD Red 10TB, or maybe there will be something larger by then. If a 3TB drive fails, I still have 7 of those "left over", so maybe I will use those, but any 6TB or 10TB gets replaced with 10TBs.
  25. @Squid Awesome! And that's why I never did hardware raid, and why I'm reluctant to use Freenas... All I need for that is the Dynamix File Integrity plugin, or is there something better? (Or, worst case, something hand-rolled like the checksum sketch below.)
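Worst case, I figure even a hand-rolled checksum manifest would do the job the way I want it - record hashes once, re-run after a rebuild, and whatever doesn't match is what needs replacing. A minimal sketch (the paths are just examples, and this is obviously not the Dynamix plugin itself):

```python
#!/usr/bin/env python3
# Minimal checksum sketch: record SHA-256 hashes for a share, then re-run
# after a rebuild to see which files changed or went missing.
# Share path and manifest location below are hypothetical examples.
import hashlib
import json
from pathlib import Path

SHARE = Path("/mnt/user/movies")
MANIFEST = Path("/boot/config/movies.sha256.json")

def sha256(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

# Hash everything currently in the share
hashes = {str(p): sha256(p) for p in SHARE.rglob("*") if p.is_file()}

# Compare against the previous run, if there was one
if MANIFEST.exists():
    old = json.loads(MANIFEST.read_text())
    for name, digest in old.items():
        if hashes.get(name) != digest:
            print(f"MISMATCH or missing: {name}")

MANIFEST.write_text(json.dumps(hashes, indent=2))
```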