Everything posted by testdasi

  1. Onboard ports are basically an integrated "card".
  2. Would this work?
       • Share the source folder on the network as SMB
       • Set up an unRAID share that is hidden and protected (i.e. not even read-only)
       • Use Unassigned Devices to mount the SMB source folders on unRAID
       • Use Dynamix Scheduler to run a periodic rsnapshot backup from the UD mounts to the protected unRAID share
       • Use Crashplan to back up the unRAID share to their cloud
     The limitation I found with my previous Crashplan-only arrangement is that I don't have a readily-usable mirror and restoring is a very long and painful process. Hence, perhaps having a local mirror + off-site backup is better in terms of "quality of life".
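     A minimal sketch of what the middle steps boil down to, assuming a source share //desktop/photos, a mount point /mnt/disks/desktop_photos and a "daily" retain level in rsnapshot.conf (all of these names are made up for illustration; in practice Unassigned Devices and Dynamix Scheduler do the equivalent from the GUI):

        #!/bin/bash
        # Mount the SMB source read-only (this is what Unassigned Devices does for you)
        mkdir -p /mnt/disks/desktop_photos
        mount -t cifs -o ro,credentials=/root/.smbcreds //desktop/photos /mnt/disks/desktop_photos

        # rsnapshot takes its destination and sources from /etc/rsnapshot.conf, e.g.
        #   snapshot_root   /mnt/user/protectedbackup/
        #   backup          /mnt/disks/desktop_photos/   desktop/
        # Then run it at whatever interval the scheduler allows:
        rsnapshot daily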
  3. Are you on the beta? If so, increase your md_num_stripes (the officially-approved workaround) and wait for the next beta (LT said they think they know where the bug is).
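     For reference, that tunable lives under Settings -> Disk Settings (with the array stopped). A command-line sketch, with the caveat that the mdcmd wrapper and the 4096 value are from memory, so verify against your version before relying on it:

        # Bump the number of stripe buffers the md driver can keep in flight.
        # 4096 is only an example; the GUI route is the safer way to change this.
        mdcmd set md_num_stripes 4096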
  4. Totally agree with the bolded part. I just went on holiday for a week and they found a fix! I should go on holiday more often. Also to note: the workaround for the freezing did not work 100% for me, so here's hoping LT's fix will be permanent and 100%.
  5. Oh dear! This just gives me extra incentive to build my proper unRAID server this weekend. Scary, very scary!
  6. Yes, potentially hardware specific - or not. I don't think even LT knows at the moment. Correction: LT has said: "we are pretty confident we have discovered the bug causing deadlocks and system hangs and are in the process of testing patched code now before rolling out a new release" <-- kudos!!!! Jumping straight into the beta is neither a good nor a bad idea in itself. I jumped straight into the beta myself, but on a test setup. On my actual server, I'm planning to use 6.1.9. You just need to be very clear that you are using a beta and therefore be at peace with living with the bugs.
  7. This is the first time I've heard of network settings needing to be changed to install a new drive. What was changed exactly?
  8. I would put a big caveat on gridrunner's reply. There is currently a bug in the beta that causes the array to hang under heavy transfer - which, for unknown reasons, seems to be worse with VMs. All the responses I've read so far from Lime Tech suggest they still haven't pinpointed where the issue is. The temp fix is to increase the number of stripes, which seems to work for others (not for me, unfortunately - it just postpones the hang / makes it (a lot) less common, it doesn't cure it completely). This bug was not reported in 6.1.9 as far as I know. There's a reason beta is beta.
  9. A few points.
       • unRAID 6.2.0 beta is still buggy. Get the Samsung SM951 AHCI version so you can use stable 6.1.9. In both benchmarks and real life, the SM951 AHCI is comparable to the 950 PRO.
       • Do NOT mix RAID with unRAID, especially RAID 0. In fact, don't bother with RAID 0 at all. For SSDs, the real-life performance improvement is imperceptible and the risk of RAID 0 is just not worth it. I'm speaking from personal experience: a RAID 0 failed on me out of the blue despite both drives later testing perfect with zero SMART errors. The "but it has never failed for me" justification is a fallacy => it hasn't happened to you because it hasn't happened YET => until it does!
       • I urge you to reconsider watercooling. You are doing an unRAID build, so presumably it would be running close to 24/7 (otherwise you are better off with other solutions e.g. SnapRAID + DrivePool). Watercooling just introduces more points of potential failure; your system will crash within a few minutes if the pump fails, for example. Air doesn't fail. Even with a dead fan, a passive Noctua heatsink lasts longer without crashing your system, and under idle / low workload it can last practically forever.
       • Most high-end server motherboards have built-in (2D) non-Intel graphics (usually ASPEED IIRC). Check the spec. There's no need to get a separate GPU for the unRAID console since there's never a need for 3D.
       • Be careful that some motherboards are not EATX but SSI-EEB (same size as max EATX but different standoff locations). Some cases advertise SSI-EEB support but it just means you have to remove a few standoffs (so the board is held on with fewer screws). Some cases don't mention SSI-EEB support but actually are compatible (or partially compatible, i.e. by removing some standoffs). Some cases don't mention SSI-EEB and actually are NOT compatible. It is a confusing situation.
       • Get an 80+ Titanium PSU if you want to run 24/7.
     ... yeah, probably that's it.
  10. Depending on how you set up the shares (the allocation method and split level), it is entirely possible for some episodes of the same TV series to end up on different disks.
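     If you want to see how a share actually got spread, user shares are just the union of the per-disk folders, so something like this shows which disk holds which episode (share and series names are made up):

        # Each /mnt/diskN holds that disk's slice of the user share
        ls -R /mnt/disk*/TV/"Some Series"/ 2>/dev/null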
  11. Besides power consumption, is there any other reason?
  12. The hanging array only seems to happen to some people (the devs have said they just can't reproduce it in the lab). They have proposed a workaround - increasing the md stripes attribute - and it seems to fix it.
  13. You're using the beta? There is a known issue with the array hanging, which I think is your issue, given the clue that you can still telnet in, i.e. as long as it's not querying the array, it's fine.
  14. Turn on turbo write. You may see a better real-life impact.
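     For reference, turbo write is the md_write_method tunable under Settings -> Disk Settings. A command-line sketch, with the caveat that the wrapper name and the value for "reconstruct write" are from memory:

        # "Turbo write" = reconstruct-write mode of the unRAID md driver
        # (1 should be reconstruct write, 0 the default read/modify/write - double-check)
        mdcmd set md_write_method 1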
  15. Good to know that the temp fix works. md_num_stripes is explained here. The summary is: the higher the value, the more active pieces of 4k IO can be in flight simultaneously. So increasing it helping suggests there is something in unRAID's processing that causes a lot of stripes to become stuck in an active state. Considering this wasn't reported with 6.1.9, it has got to be the new code. I would connect the 4k IO dot to the NVMe support, but that's 1 man's 2p.
  16. I have the same problem but with my GPU. If the screen has a lot of activity, I can hear a buzzing noise coming out of the soundcard. I think it's electromagnetic interference, so other than repositioning your soundcard / cables etc. away from the source, there isn't much more you can do. Another possibility is that the interference is affecting the USB card, which in turn affects the signal being sent to the soundcard. That would be very difficult to fix, but repositioning the USB card to another slot might help.
  17. In other words, Crashplan provides a pretty awesome "Swiss cheese" layer against ransomware, in addition to any local NAS arrangement.
  18. Ah, sounds like the dreaded unexplainable hang that has been reported several times already on this thread. It seems dAigo and myself have had it, and now you (and a few others). The devs unfortunately cannot reproduce the issue, so it's pretty much an either-you-have-it-or-you-don't kind of situation. Next time it hangs, can you try using another PC to ping your server? What I have found in my case is that even though everything appears unresponsive, the server still returns pings (and things that are running without querying the array, e.g. the top command in the console, will continue to run without any issue). It appears to be array-related. Also, would you mind sharing your configuration?
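     A quick way to check from another machine when it happens (the hostname here is just the unRAID default; substitute your own):

        # Does the box still answer on the network even though the GUI / shares look dead?
        ping -c 5 tower

        # If telnet/SSH still works, anything that does not touch the array keeps running
        ssh root@tower top -b -n 1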
  19. Hmm... that sounds a bit similar to what is happening with me on 6.2.0 beta 21. I managed to pinpoint that it's due to something hanging in the array, but couldn't figure out what. dAigo seems to have a similar issue too. You might want to try 6.1.9 to see if it works.
  20. I'll be the devil's advocate here and suggest you consider an alternative solution: SnapRAID + StableBit DrivePool.
      Pros:
       • Runs in Windows, so you can have all your familiar Windows apps, i.e. no need to touch Linux or VM pain.
       • Drives in the pool are formatted as NTFS, so if there's a problem, the drives are fully accessible by any Windows machine without any additional software requirement.
       • Parity is run at intervals (or whenever you want), reducing the write penalty.
       • No need to deal with pass-through, VMs, dockers etc., e.g. no need to worry whether you need "VT-d" and stuff to run a VM.
       • You can "undelete" files very easily, even if files were saved on multiple drives.
       • Cheaper overall and definitely a lot cheaper to test. SnapRAID is free + DrivePool is free to try and only $30 to buy. unRAID costs a minimum of $60 for 6 drives => see my moaning below for some more info about the "Trial".
      Cons:
       • More difficult to set up initially, e.g. SnapRAID is a command-line tool (see the sketch below).
       • The pool is not protected until parity is run - risk of parity being out-of-sync with data.
      The above excludes things I know unRAID doesn't have natively but can work via plugins / scripts (e.g. CRC checking, Unassigned Devices etc.).
      The moaning:
       • unRAID includes ALL storage devices in calculating the license requirement (i.e. not just drives in the array). That makes the "Trial" version useless for anyone with established storage arrangements to try it out. I have 4 HDDs, 5 SSDs and a 6-type card reader. That shows up in unRAID as 15 devices = "Pro" license. Hence testing unRAID is an ordeal, involving deactivating SATA ports + M.2 + PCIe storage in the BIOS - and then reactivating them if I want to use my PC again.
       • Information for unRAID is highly fragmented / outdated. Asking the forum is a lottery really => just check out the number of questions without any responses. Not saying it's any better with SnapRAID + DrivePool, but they don't require as much tweaking, in particular with passthrough and VMs. It's the tweaking to make things work that is the most painful in my opinion.
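     To give a flavour of the SnapRAID side, a sketch of what the setup amounts to (drive letters and paths are made up for illustration; check the SnapRAID manual for the real thing):

        # snapraid.conf on the Windows box, roughly:
        #   parity  P:\snapraid.parity
        #   content C:\snapraid\snapraid.content
        #   content D:\snapraid.content
        #   data d1 D:\
        #   data d2 E:\
        # Protection is then just a matter of running, at whatever interval you like:
        snapraid sync     # update parity to match the current data
        snapraid scrub    # periodically verify data against parity
        snapraid fix      # recover files from parity when something goes wrong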
  21. I think it is an established fact that nobody (not even LT) knows when their beta will be stable. In the meantime, if you want unRAID, get the SM951 AHCI (which should still be selling for cheaper than the 950 PRO). Reviewers have shown that the 950 PRO doesn't offer any significant real-life improvement over the SM951 AHCI version (or even the SM951 NVMe version). With the kind of usage a typical Joe has, you will barely even stress it.
  22. The short answer is: don't do that. unRAID still does not support RAID 0 out of the box, so you are likely to run into trouble. Just mount either of the SSDs outside the array. Btw, the notation is:
       • g = conventional giga = 10^9 (i.e. 1000 x 1000 x 1000)
       • G = computer giga = 2^30 (i.e. 1024 x 1024 x 1024)
       • b = bit (1/8 of a byte)
       • B = byte
      So in your case, the notation is 120gB (or 111GB). 111Gb would be 2^30 x 111 bits = 13.9 GB.
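     The arithmetic spelled out, using the notation above:

        # 120 gB = 120 x 10^9 bytes; in "computer giga" (2^30 bytes) that is ~111.8, hence the ~111GB figure
        echo "scale=3; 120*10^9 / 2^30" | bc

        # 111 Gb = 111 x 2^30 bits = (111 x 2^30) / 8 bytes = 13.875, i.e. ~13.9 GB
        echo "scale=3; 111*2^30 / 8 / 2^30" | bc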
  23. I think you can install dockers on an SSD via Unassigned Devices. I tested it briefly and it seemed to work on the surface, but I did not do any deep tests. And with Turbo Write turned on, your array performance will be reasonably fast (I can get north of 100MB/s, even on the "long stroke" ends of the various drives), so why bother using a small-capacity HDD as cache anyway? It will just occupy 2 device slots (so you potentially need a more expensive license) for marginal gain. If you already have the £££ to buy the more expensive license, then why not just buy an SSD to use as cache instead? Dockers, VMs and cache can all share the same SSD / SSDs. There's barely any performance hit (my test rig uses a very old Kingston 128GB SSD => how old? It's SATA2!). If cost is a concern for you, then to be honest you will save more by just using the SSD as cache and putting the VMs + dockers on it, since you are going to get an SSD anyway. And if the cache is full (because of dockers + VMs), unRAID will just automatically write straight onto the array.
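     For what it's worth, a quick way to keep an eye on whether the cache is filling up (and therefore whether writes are spilling straight to the array) is something like this (the share name is just an example):

        # Free space on the cache SSD vs. the array disks
        df -h /mnt/cache /mnt/disk*

        # Files of a cache-enabled share still sitting on the cache, pre-mover
        ls /mnt/cache/Downloads/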