Search the Community

Showing results for '#ssdindex'.

Found 20 results

  1. With the recent update on this topic, I thought I would mention how I almost always do drive updates. I am not a videographer as my friend @gridrunner is, but I'll lay it out in concept. It is very uncommon for me to have disks fail. I think in my 10+ years as an unRAID user, I've only had 1 fail in the array. Even seeing SMART attributes start to increase is rare, and it has never been catastrophic because I have an early warning system. What happens far more often is that my drive count gets larger and larger and I reach a point where I'd like to add more capacity but don't want to add yet another drive. For example, if my array contained 4 3T drives, 3 4T drives, 1 6T drive, and 1 10T drive (+ 10T parity), I might say those 3T drives are looking pretty small and kind of a waste of slots, and they might have been in my array 5 years, so their chances of failing are on the rise. I could replace 3 of them with just a single 10T drive. And if I added 2 10T drives, I could replace all 4 and have at least 8T more space + 2 free slots to let me easily grow my array. That might keep me fat and happy for several years. In order for this strategy to work, I'd need a way to add the 2 10T drives to the server temporarily. I advise always having at least one slot free (or easily freed), as there are many failure scenarios that require an extra disk. If I don't have the slots, I will sometimes remove a cache SSD and/or UD temporarily, or jerry-rig some type of external connection (as simple as laying the drive on the floor, label down (like a turtle), and cabling it to a controller and power). This can work fine for this short (few days) operation if you can ensure that no person or animal is going to disturb it. And disable any VMs / dockers that need the drives you've disconnected. (There are also some hybrid methods that would require only 1 (or possibly no) extra slots.) I first preclear the new disks. (There have been some debates about not preclearing, but I think it is valuable and always do it. I know that there have been issues with the plugin [which I was not involved in, except that I did allow @gfjardim to bundle my "fast preclear" script version in his plugin]. If anyone wants a functioning version of the fast preclear script that is easily run in a screen session, feel free to PM me.) In parallel I'd sometimes do a parity check on my array. Before starting, I look at the SMART attributes on all the drives, and after finishing I look at them again. If I am seeing issues with reallocated sectors, or other attributes that are starting to rise, I might reconsider how I do the upgrade. But mostly the parity check comes back clean and the SMART attributes are fine before and after. The fact that this overlaps with the preclears typically means no lost time. I might omit the parity check if one was already done recently and all is clear, or if I am concerned about one of the array disks. A full disk read (parity check) could weaken a suspect drive further; such a scenario might cause me to rethink the parity check. (But I still have parity protection, so even if the disk failed, I'd not lose data.) I then configure the new disks as UDs. Format them. And using Krusader, Midnight Commander, cp / rsync in a "screen" session, or some other method, copy the data from the 3T disks to the 10T disks (a command sketch follows after this results list). Note I can copy data very fast because parity is not involved. I can also copy to both of them in parallel (from different 3T disks) with no slowdown. (If I try to copy data from 2 3T disks to the same UD, then it would slow down.) So if I copy 2 of the 3T disks to one 10T disk one after another, overlapped with copying the other 2 to the other 10T, it takes the same time as a full-speed copy of 6T (8-11 hrs). I will mention that this approach has a few other benefits: I have a lot of flexibility in terms of what data I copy to which 10T disk. If I have data from 2 shares that have organically grown onto the 3T disks, I might like to dedicate one of the 10T disks to one share, and the other one to the other share. I have a lot of flexibility to put the data onto whichever disk I want - or even onto another disk in my array that has some available space. If I am doing a lot of this, I'll often create a shell script to do the copying so I account for all the data. The bathtub curve for disk failures points to higher than normal failures early in a drive's use. I've observed this. Often the preclear finds them. But if preclear works and I see subsequent SMART issues or, worse, read or write errors on a 10T disk, I can prevent it ever entering the array. I can just remove it, return it, and get a new drive. My array continues to operate and is fully parity protected. If I have a desire to change the filesystem, this is a perfect time to move to XFS or BTRFS. Copying file by file means I am not stuck with RFS on my 10T disk the way a rebuild of a 3T RFS disk would be. Although not a huge problem, this method also defrags the data. Using unRAID to rebuild a disk is going to take a mirror image. I already mentioned that this prevents moving to a different filesystem, but similarly, if the disk is fragmented due to a lot of adds, changes and deletes, the rebuilt disk will be identically fragmented. So once the data is all copied to the 10T drives, I can do a new config, omit the 4 3T drives, add the 2 10T drives, and rebuild parity. (I can then physically remove and keep the 4 3T drives as backups.) There is a very small risk of a drive failing while building parity, and if you are worried about it there are ways to protect yourself. (If there are questions I can elaborate.) But after doing a parity check just a day or two earlier, the chances are very low and I tend to just let it run. I have my critical data backed up, and there are any number of very unlikely risks - burglary, fire, a PSU failing in a nasty way, etc. - that I am accepting. I put this risk in the same probability class as those. The final step is to review your user shares and adjust include/exclude settings. With this method, you are not only adding more space now, you are freeing slots to be able to preclear and add new disks for the next 2 drive updates. And you're also able to remove all of the smaller drives from the array, which will normally have a very positive impact on parity check times. (#ssdindex - Adding space to an array by replacing group of smaller disks)
  2. Yes - it seems the wiki should be updated to make this more clear. Early on I did a lot of work on the wiki, but most of it has been rewritten or had new content added, and I'm not sure who has the responsibility now. @jonp might be able to funnel this request to the right person to do the update. Good that you didn't do the new config yet. One thing you need to know and understand is the user share copy bug, which I discovered a long while back and which, for technical reasons, cannot be fixed by LimeTech as I had hoped. Steps have been taken to avoid this issue, but your situation is particularly susceptible to encountering this bug. Here is what appears to be a perfectly valid thing for you to do, but this WILL result in losing a lot of data in a hurry. (Again, do not do this!) Remove the failed disk from the user share configuration. Then copy all the data from the disk share folder (e.g., /mnt/disk4/Movies) to the user share (/mnt/user/Movies). The reason this will not work is that excluding disks (meaning explicitly excluded or not included [you should only use included or excluded, not both, BTW]) does NOT truly exclude them from the user share except in one very specific use case. If a file copied to a user share overwrites an existing file, that file will be overwritten on whatever disk it is present, even if that disk is excluded in the user share. And even if the file really is a new file, the split level can force that new file onto a specific disk, even if that disk is excluded. Only if a new file is being copied that does not already exist, and the split level is not impacting its placement - only in that situation do the excluded/included disk configurations come into play. Also, when you browse to a user share, it is going to show content from all disks in the array that contain the root level folder for the user share, regardless of the include/exclude share configuration. The only way to stop these user share behaviors is to globally exclude a disk from the user share feature. Once you do that, the disk will be ignored for anything user share related. You would not be able to have any user shares on that disk. So if you copy (or move) a file from the user share folder on a disk share to the user share, the user share will think you are overwriting an existing file. So you are basically trying to copy a file on top of itself. Normally the operating system prevents you from doing something like that - you'd get a "can't copy a file to itself" error and the OS would stop it before it tried. But in this situation, the OS does not realize what the user share is doing under the covers. And it will not prevent the operation. So it will try to copy the file, and immediately clobber the source. With the source gone, the copy fails, and the contents of that file are lost. Say you are copying (or moving) 500 files that take up 1T; you might think you'd somehow realize what was happening and stop it. But in truth unRAID would wipe out those 500 files very quickly. Only the first block of each would be attempted, the copy would fail, and then on to the next file. A rule of thumb is to always copy disk share to disk share, or user share to user share. Do not mix. But I'll give a tip that allows you to safely copy from a disk share to a user share (a command sketch follows after this results list). Go to the disk share and RENAME the root level folder. Say it was called "Movies". Change it to "X" or "MovieTemp" or anything that is different from "Movies" and not the name of some other user share. This will instantly separate the files on that disk from the "Movies" user share, and temporarily create a new user share with the name you gave. You also need to make sure that the user share configuration excludes (or does not include) that disk. You can then copy from that disk share to the user share. Or, copy from the user share "X" or "MovieTemp" or whatever you call it, to the "Movies" user share. This would result in all of the movies being copied to one of the currently configured disks in that share. It is not necessary to move; copy is fine. The disk is being simulated, and when you reconfigure your array, any files on that disk will be poofed out of existence. Moving requires deleting the file on the simulated disk, which would unnecessarily waste time - potentially a lot of it. Post back with any questions. (#ssdindex - User share behavior, user share copy bug)
  3. Obviously no solution is going to meet everyone's needs. But I offer the following points for your consideration: 1 - At its heart, unRAID is a NAS server. It will protect an extremely large array (over 300T with the unlimited license) from most single (or dual, if dual parity is installed) failures. This is real-time protection, continuously updated as files are added/changed/deleted. Disks can be of arbitrary / mixed sizes. Many here on the forum found the core NAS features alone (before the docker, VM, or dual parity features existed) to be well worth the cost of admission for unRAID. 2 - Opening your server to remote access represents a security exposure. There are VPN solutions available for unRAID to enable this feature, but they do require some effort and understanding to get fully enabled. But there are other options that work for most users and do not require a VPN. Consider TeamViewer (free for personal use), which provides secure remote access to a Windows box (VM or physical). Once connected, it is like you are sitting at home at that computer and on your network. It is very easy to access the array and transfer files. Plex (free with a Plex license) has a remote access feature that allows media to be accessed remotely. These two remote access features are very easy to configure even for the least technical user, and support the lion's share of use cases for those interested in gaining secure remote access to their server without getting into setting up a VPN. 3 - If you look at the cost of a QNAP or Synology solution to support 30 disks, you will be spending a lot of money, if it is even possible. I just looked at a server that supports only 8 3.5" disks at a cost of $1200. It includes a low power CPU inadequate for anything processing intensive. These are all-in-one hardware/software solutions, so the license is embedded in the cost, and you pay for it again as part of the purchase price of a server upgrade. An unRAID server can be built very economically. All it takes is a basic computer, a $50 controller card, a couple of drive cages, and an unRAID license to have a robust setup for 10 drives. An unRAID server of the size and power of the $1200 unit mentioned could be set up for 1/3 of that cost, or even less. A $1200 unRAID server would be a very powerful unRAID server, or come populated with drives. 4 - The unRAID license is a one-time cost, and is easily moved to an upgraded server as storage needs and/or more horsepower are needed. The licenses are also upgradable to higher capacity without repurchase. No unRAID user has ever had to pay for an upgrade to gain access to new features - even with the addition of Docker, VMs, dual parity, and increasing disk counts (for the full license, the max disk count has increased from 12 to 30 since I have owned it). And when you consider the cost of the license against the hardware and drive costs of setting up a server, the license is a very small percentage. The ability to protect against drive failures at the cost of a single drive makes it the most economical redundancy available. 5 - With the inclusion of VMs, many users are able to virtualize their workstation(s), gaming rig, media player and other physical machines to all run as VMs on their unRAID server. This is a very attractive feature for many, myself included. I am typing this into my Windows VM running on my 12 core i9 unRAID server with KVM hardware passthrough. And this server is also supporting Plex, which can transcode (even processing-intensive 10 bit / 4K HEVC video in software) with only about 30% load. Oh - and this is running on my unRAID full license that I bought in 2007 and have never needed to upgrade through 3 major server rebuilds and countless upgrades. Certainly there are other NAS options in the marketplace, but I consider unRAID to be the most compelling platform available today. It provides parity protection, low entry cost, wide hardware compatibility, mixed drive sizes, a perpetual license, and access to an expanding set of Docker apps for most every purpose under the sun. The VM feature is robust and easy to use for folding multiple physical machines into VMs running on the server. You may find lower learning curves on the all-in-one solutions, but the training wheels come at a high price and are very limiting. One final comment on forum support. A conscientious set of very knowledgeable users - virtually all volunteers, aside from the 3 LimeTech employees, who rarely engage in day-to-day issues unless an email is sent to LimeTech directly - provides a high level of support to users with questions or problems. Complaints like yours are exceedingly rare and quickly rectified if someone is getting frustrated and posts in the forums. You are certainly welcome to go elsewhere - but I hope this helps explain the unRAID value proposition as compared to other offerings. Best of luck! (#ssdindex - UnRaid vs QNAP / Synology?)
  4. I understand the exhaust part. It's your air intake I don't understand. There are no fans in the upper back of the case that I can see. Cool air will NOT just waft into the case from above without a fan driving it down. And you are working against the natural tendency of warm air to rise. Your layout is much more likely to result in hot air accumulating at the top of the case with nothing much to force it out. And that hot air will recirculate and get hotter, reducing your cooling. How is the exhaust from the CPU on the right getting out of the case? Looks like it is blowing into the back of your drives?? It will tend to rise to the top of the case and, as mentioned, get recirculated back into the CPU coolers. Take a look at this: https://www.howtogeek.com/303078/how-to-manage-your-pcs-fans-for-optimal-airflow-and-cooling/ It has a lot of info on case cooling, positive vs negative pressure, etc. Here is what you are trying to achieve: cool air coming in low at the front (it could also come in from the bottom, depending on the case), and hot air going out the back and top. Perfection is impossible, but if you turned the fan on the CPU on the right to point up, added an exhaust fan up there, and had a couple of intake fans on the front to get cool air into the case, I think your cooling would be much more effective. (#ssdindex - case cooling)
  5. Shares are nothing more than groupings of files that are in the same-named root folder on the array disks and cache. Include / exclude settings are ignored for what a share shows. So if you have a "Movies" folder on disk1 and a "Movies" folder on disk2, and then configure the "Movies" share to exclude disk2, you will still see the files from both disk1 and disk2 in the share. When a file is written to the share and that file already exists, it will be overwritten - even if the file exists on a disk that is excluded (or not included). When a file is written to the share and, based on your split level rules, its directory is non-splittable and already exists at or beyond the split level, the file will be added to that subdirectory on that disk - even if that disk is excluded (or not included), the placement goes against the allocation method, or there is not enough room for the file. When a file is written to a share, does not already exist, and the split level doesn't force it to a specific disk, the include/exclude settings (as well as the allocation method) come into play: unRAID will put that file on an included / not-excluded disk. There is a global user share setting that can totally exclude a disk from user share functions. If a disk is excluded in this way, it will not appear in any user shares and will never be written to as part of a user share. (#ssdindex - user share rules)
  6. If you have 2 parity drives and two drives fail at the same time, you can rebuild them both. That is a pretty unlikely scenario if you are keeping on top of your drives' health. Slightly more likely is that you have a drive fail, and while rebuilding it a second drive has a read error, and the second parity can engage to make sure that you do not have corruption in the rebuild. This has actually happened a few times in unRAID history. And if you have a flood and are lucky enough to only lose 2 drives, dual parity is helpful (this has happened once or twice). If you don't have hot-swap drive cages, it is QUITE likely that when you have a drive fail, while swapping out the disk for a new one, you will knock a cable loose on another disk. If a second drive drops offline (even if it didn't actually fail) while the first drive is rebuilding, the second parity allows the rebuild to complete anyway. And then afterwards you can fix the second failure, which is likely a loose cable. (You could have recovered from this second failure with some effort, but the second parity makes it much easier and less likely you'd have any corruption as a result.) If you have hot-swap cages and have been around the block with unRAID, the second parity protects you from such an unlikely scenario that having it is of little value IMO. You are exposed to other low probability risks like fire, theft, falling Chinese space stations, etc. Those are as likely to cause a total loss as an event that limits the loss to just two drives. If you didn't devote a disk to second parity, what could you do with those funds or that disk? One - you could buy some of those hot-swap cages. They are extremely useful in preventing the almost inevitable cabling snafu that happens when you stick your fat hand into a jumble of cables to unplug one. And swapping a disk becomes a 5 minute exercise. With the cages, it is the cages that are cabled, and the drives go in from the front. You never touch the cables and never risk them getting jostled (except on the rare occasions you are installing a new motherboard or another cage, or moving to a different case - things that happen very rarely). If you can't tell, I think they are a necessary investment - and they would come before second parity if it were my money. A second parity disk could easily run $300, and for that you could get five 5-in-3 cages. What else could you do? You could use that second parity disk as a backup disk. Back up the most valuable data - the data you say is backed up but you know the backup is woefully old and incomplete. An 8T, 10T, or 12T disk would back up all of your non-media files, and even a lot of media too. Take it to your parents'/sibling's/friend's house for safekeeping. That backup would protect you from virtually anything that you yourself would survive. So you have good backups, you have drive cages, and you have this extra disk you are doing nothing with? Second parity is a great idea. Or say you are a new user and subject to making mistakes? A second parity would protect you from some of them. Or say you have a maxed-out array of 20 or 25+ drives? Maybe then it makes sense to have that extra parity. That's my $0.02 on dual parity, for what it is worth. (#ssdindex - Do I need dual parity?)
  7. Thanks for taking the time to post this guide! I haven't tested it myself, but it looks detailed and comprehensive! (#ssdindex - GPU Passthru Guide for overcoming Error 43)
  8. I would draw your attention to the 3 columns named "Value", "Worst", and "Threshold" in your screenshots above. These are called "normalized values". Value is the current normalized value. Worst is the lowest (worst) value the attribute has ever reached. Threshold has nothing to do with your specific disk; it is the level at which the drive itself will start reporting that it is failing. If you look at the SMART reports, the raw read error rate shows (Value / Worst / Threshold): D1: 79 / 65 / 6, and D2: 83 / 68 / 6. So this means both disks have gone down into the 60s, but it isn't until they get down to 6 that the drives would consider themselves failed. Generally the nominal "normal" is 100, and these are below that. But every manufacturer and even model has different rules, and your values of 79/83 for Value and 65/68 for Worst might be perfectly normal for these disks. My experience is that the SMART values themselves are not that useful. It is really the delta in SMART values that is interesting. For example, if you told me that 2 days ago the value was 100 and the worst was 98, and now the value is 79 and the worst is 65, I'd be concerned by such a rapid drop and would closely monitor it. If you kept this as a baseline and looked at the attributes from time to time for sudden drops, you would be able to more closely track the health of the drive across the various attributes (a command sketch for keeping such a baseline follows after this results list). The attributes Frank mentions above are the ones we typically associate with failing drives. And we look more closely at the Raw values than the normalized ones for those specific attributes. And we are pickier than the drive manufacturers when some of the Raw attributes start to increment. Hope this helps. Enjoy your array! (#ssdindex - Smart attributes - value, worst, threshold)
  9. OK, so what is a mount point? Let's start with drive letters. When you add a drive to Windows, it gets a drive letter. The first disk is c:, then d:, etc. Linux doesn't work like that. Each new drive gets a device name, like /dev/sde, and that name can be used to partition and format the disk. But in order to use the drive - to add new files or access existing ones - it has to be mounted. You might think that since the disk is /dev/sde, you could create a file called /dev/sde/myfile.txt. No, it doesn't work like that. The disk is a closed book until it is mounted. To mount a disk, you create a folder, and then mount the disk there. So, for example, you could create a folder called /edrive and mount /dev/sde there. Now, by going to /edrive, you can add new files to the disk and access existing files. The folder /edrive is its mount point (a command sketch follows after this results list). You could then unmount it, which closes the book so to speak, and then mount it some other place, like /mnt/mydisk. The /mnt folder is, by convention, the location where most disks get mounted. UnRaid mounts all your data disks there, e.g., /mnt/disk1. Now, disks are not the only things that can be mounted to a mount point. Suppose you are accessing a remote computer and want to access its files. In Windows you might refer to a remote share as //mycomputer/sharename. But in Linux you would need to create a mount point and mount that remote share there. You might call it /mnt/sharename. Once mounted, you can access its files. So what else can be mounted? UnRaid has this concept of user shares. They are not real disks, but they sort of look and act like them. So unRaid gives them a mount point and you access them by that mount point, e.g., /mnt/user/Movies. All user shares are in the /mnt/user folder. So let's talk about Dockers for a second. Dockers are interesting little prepackaged units that contain Linux applications. A docker cannot see the disks you have mounted. In fact, a docker has a completely different "root file system" and cannot see anything on your unRaid server. So it can't be mischievous and access files it is not supposed to. But Dockers are built to expect that the user will "map" folders from the server file system into the Docker's file system. So the docker can be built to expect its configuration files to be in a folder called /config, and you can map your real folder called /mnt/user/appdata/appname to the Docker's /config folder. Whenever the docker accesses /config, it is really accessing your appdata user share for the docker. These mappings are called volume mappings. Don't confuse them with mount points, although some of the concepts are similar. So what if you didn't do this, and just left the config folder unmapped? What would happen? Well, it would appear to work fine. All of the config data would be stored inside the docker container. But when the docker gets updated in a way that resets the container, all that configuration gets lost and you are back to square one. So anything you expect to persist needs to be mapped from locations on your server. The volume mappings vary from docker to docker, but almost all have a /config mapping. Getting the mappings right is key to getting Dockers working correctly. Hope that helps. (#ssdindex - mount point basics)
  10. @MrOnionSkins - For any motherboard upgrade, I suggest that you burn in the new system thoroughly, including memory testing. You definitely don't want to subject your data to the risk of a bad motherboard or bad memory. While a motherboard swap can be simple for a NAS-only array, if you have VMs with a passthrough GPU and/or USB controllers, these will not move over smoothly. Advanced server tuning can also include configurations that won't move over. And even if you don't use these features, there are other considerations beyond just the motherboard swap. I'll try to cover some of these topics and considerations. If you have VMs, you really need to back them up, get them working without passthrough (using VNC or something like NoMachine), and configure the keyboard without passthrough. You also need to remove passthrough directives and tweaks from your syslinux configuration. Get these working on the current motherboard and they will move over better. If you have made tweaks to your VMs and/or Dockers limiting/specifying cores, you need to consider those in the upgrade. The core matching may be different on the new CPU. Your logical core count may even have gone down (e.g., moving from a quad core i7 to a hex core i5, the logical core count could go from 8 to 6). I'm not sure how KVM or docker would react to referencing non-existent cores. As was mentioned, if you don't have a backup server, an upgrade is a good opportunity to set one up. You can set up the new motherboard in the backup server, do the burn-in, get everything figured out in terms of passthrough and tuning, and then do the case or drive swap. Depending on what your server does - host your Windows workstation, logical router, downloader, media library, even basic TV, whatever - it may be desirable to be able to bring up the backup server as the primary server so critical functions are not impacted. There is no cookbook here, but it is something to consider. I'm actually doing this with an upgrade I am working on, so my old server is both a data backup and a redundant server for critical functions. You also need to consider whether you are upgrading your cache to NVMe or a new SSD; you'd need to transfer the contents over. If your USB stick / cache contents are old, going back to older versions of unRaid, or even if your Docker configs are old and unnecessarily complex, you might consider starting from scratch and rebuilding the stick, cache, and Dockers. The setups and defaults have changed over time. This can also be an opportunity to rethink your library setup in Plex, for example. Starting clean eliminates the hodgepodge, and the uncomfortable feeling of not knowing what is real and what are remnants of a prior generation. If this is your plan, you might think of standing up the new server, getting everything working, and as the last step moving just the disks to the new server. Then your current server runs largely unchanged and ready for backup duty. So an upgrade can be very straightforward, but it can also be pretty involved. There can be quite a lot to think about. Give it some thought so the cutover goes smoothly. Feel free to ask questions as you consider your upgrade. (#ssdindex - Server upgrade)
  11. Those are all VGA (analog) 1080p. If you read some of the reviews, you'll see there are complaints about image quality - VGA cables at 1080p lose brightness and focus. And just to repeat: any of these Ethernet extenders requires a dedicated point-to-point Cat5e+ cable that does not go through your router - a single run from the server to near the monitor. You can't just run Cat6 to your gigabit switch on both sides and expect it to carry the video from your computer to the monitor! (I didn't understand that when I started looking - you guys obviously do. I'm just spelling it out for other readers.) This one looks much better for 1080p. It is HDMI. Not a bad price. Lots of buyers. And good reviews. There might be issues with HDCP / HD audio passthrough - you need to read carefully and do your homework if media playing is part of your plan. But if you want 4K HDMI capability with HD audio (DTS-HD, TrueHD, ATMOS, etc.) passthrough, none of those will work. HERE is the BlackBird 4K that I had seen at Monoprice. Cat6 recommended. Pricey at over $300! But if you read the reviews, even this premium unit has failures and complaints about the heat. HERE is one I found on Amazon. "gofanco" is not exactly a household name, but it is much cheaper at $106. One user reported having a problem. He notes how "unbelievably hot" it gets to the touch. The report below chronicles his issue. They did replace it and he was happy after that. But I would be worried about any piece of electronics that runs that hot. It must be doing massive signal processing to convert the roughly 18 Gbps video signal from HDMI to Cat6 in real time. If I bought one of these Ethernet boxes, I'd want a fan blowing on it so it didn't burn itself out! As I've said, I went with HDMI. This was due to the expense, the heat issue, the potentially high failure rate, potential HD audio issues, and the fact that this would not run through my network (which would have let me connect from other locations in my house). One of my goals in moving the computer out of my office was to get the hot computer out, and the idea of a hot coal on the floor near the monitor was not appealing! HERE IS THE ACTIVE HDMI CABLE I bought. Mine is 50ft for $45. Small diameter. Video is crystal clear. No transformer or electrical connections. No HD audio passthrough limitations or complexity. No hiccups. No heat. If you are running HDMI cables, I recommend one or two of THESE USB EXTENDERS too. I have 2 - one for a Logitech Unifying receiver and one I got for FLIRC, but I wind up using it for plugging in the occasional USB stick. Zero heat. No power connections or transformers. An alternative using a software keyboard/mouse sharing tool (Synergy) was mentioned. I looked into this but passed because it requires A COMPUTER to plug the keyboard and mouse into. I don't want another always-on computer. And it requires a software install that my company doesn't allow (so it won't let me share a keyboard/mouse with my work laptop). The software typically allows a shared clipboard function, which is inherently insecure. And I am not at all sure I want my keystrokes going onto my network. Who knows how well these not-very-popular tools are secured or whether they phone my keystrokes home. I found a better option for sharing a keyboard and mouse for my needs. HERE is a KVM I use just for the keyboard/mouse part. The HDMI video runs straight to the monitor (HDMI) from the computers. This allows me to switch between 4 sources. What's special about this KVM is that it works with an IR remote. I have an old Harmony 650 with 4 buttons programmed to simultaneously switch the monitor input and the KVM input for my VM, work laptop, Surface (my backup computer when the VM is down, used mainly to bring down the VM and back it up from time to time), and one spare port for ad hoc use. The KVM is pricey but worth it if you are frequently changing video inputs. I tried lots of cheap options with mechanical buttons. OK for occasional use, but if you are switching more than a few times a day, you won't be happy. And your finger joint will start to ache after a short while! BTW, the ConnectPro people charge extra for the remote. I found the codes posted on some obscure forum and sent them to Harmony, and they added them to the Harmony database! So no need for the $25 or whatever they charge for their remote. I could not find a cheaper KVM with IR (in fact the UR-14 I linked is an excellent price for that unit, and it required some digging on Amazon to find). Two negatives on the ConnectPro. One is that it is rather large (approx 11"W, 2"H, and 5.5"D). And the IR sensor is on the front, so it has to be placed such that both the TV and the KVM "see" the IR codes from the Harmony. Mine peeks out under my large monitor and I hardly notice it, but you might give it some thought. There may be a way to extend the IR somehow, but I think you'd have to take it apart, which I'm not ready to do - there is no IR input. The other negative is that the USB outputs are on the back of the unit. I use a short USB extension cable; this prevents the Unifying receiver from being obstructed by the KVM. If running the HDMI cable end is a deterrent, look for a cable with a mini HDMI connector on one side. Finding an active one with that feature may be hard, but it would make it easier to run. Maybe not as easy as unterminated Cat6, as @jonathanm mentioned. (You'd need an adapter cable on the end to get back to full HDMI size, which might negatively impact your total cable length if you are near the limit.) Good luck with your remotely located VM! That's my complete brain dump on the topic. (#ssdindex - Options for Remotely Located Keyboard/Mouse/Monitor)
  12. SSDs can serve many needs. The decision rests with what uses you have planned. Below is in-depth coverage of the topic that I hope to use to direct others who ask this rather common question: 1 - Cache was originally conceived as a landing zone for files, e.g., from a workstation. When this feature was added, it was not uncommon for array writes to be as slow as 8-10 MB/sec. These transfers were very lengthy, and people were annoyed they had to wait seemingly forever for a digital movie to transfer, for example. (Overnight, unRAID moved these files to the array - which was very slow, but since you were sleeping it didn't matter.) But things have changed. unRAID writes are now often much faster - 40-60 MB/sec+. And there is a newer write feature called "reconstruct write" that virtually eliminates the write penalty at the expense of spinning up the entire array. When using cache as a landing zone, files added to cache are NOT protected by parity. So by adding redundancy (i.e., a RAID1 cache), files are protected as soon as they hit the cache. But as I said, I suggest turning OFF caching on all shares and using reconstruct write mode when you have heavy writes that you want to finish quickly. That will more than saturate a gigabit LAN, and it works quite well. 2 - Work area for downloads. Some of our users download media from various sources. Those downloads often require checking / correcting errors (e.g., PAR processing) and decompressing (e.g., RAR). The cache drive can act as the location for this work to improve performance; using cache would make these operations quite fast. The problem is that this is very high I/O, and SSD lifespans are tied to the number of writes. So the question is: how much does it inconvenience you to have these operations take their own sweet time? Only you can decide, but many users opt not to use cache for this. The alternative of using an array disk is pretty painful. It often involves multiple write streams, which bog down array writes even further. If the volume is high, it can lag very far behind the downloads. An option is to use an unassigned device (UD), which is a disk running outside the array. Unrestrained by parity, the I/O is much faster. In many cases a UD spinner can keep up with downloads, but it depends on the DL speed. If it is too slow, consider a second SSD - maybe an older one you're not using for anything else, so if you wear it out it is not a big issue. Or you can use the cache drive; understanding the pros and cons, this is perfectly fine. I don't think this data needs to be backed up, so no need for RAID1 IMO. 3 - VM images. VM images are a primary use case for the cache drive IMO. The image is basically your "C" drive inside a Windows VM. Just as in a bare metal computer, booting and running from an SSD makes a huge difference. Adding a second SSD and creating a RAID0 pool can make the access even faster; depending on your system, this may or may not be meaningful. NVMe drives are faster still, and running them in a RAID configuration raises performance further. I recommend storing VM images on an SSD/NVMe cache drive. Writes to the VM image are typically not excessive. If you are doing high I/O operations inside the VM, you can always use array or UD devices for that I/O, and not have to write it all to your C: drive. (I found a way to move the "TEMP" directory off my C: drive, and hence off cache, to eliminate these often high volume writes. Trickier than I thought, but it works. Ask if there is interest and I can explain how to do this.) RAID1 can be useful, I suppose, for backup (or RAID10 for both speed and redundancy). I personally don't bother. I just back up the VM image from time to time to the array, and if it ever crashes, I can go back to that. Virtually all of my data lives on the array or UDs, so only newly installed apps (few and far between) and maybe OS/antivirus updates are lacking, and they take care of themselves. 4 - Appdata. This is another primary use case for cache. An extremely popular unRAID feature is dockers. Dockers typically store their configuration data in an "appdata" share. Having this data on a fast drive makes the Dockers faster, and most users opt to do this. The space consumed is typically small EXCEPT for media apps like Plex (which I will cover separately below). I recommend putting this on the cache drive, although you can put it on a UD (spinner or SSD) or even on an array disk with little impact for most Dockers. Different Dockers' config data can be put in different places, but most people just use the cache appdata share. RAID0 for this makes little difference IMO. RAID1 would provide redundancy and help overcome a failure. But typically appdata can be reconstructed with only minor inconvenience, esp. if you have a backup of the config files for the Dockers. I don't consider RAID1 to be required here personally. 5 - Big appdata (e.g., Plex). Plex stores its metadata library in its config directory. This can get quite large, and be made up of a huge volume of tiny files. It is quite annoying actually. (Try to copy it, or even get the size of it, sometime.) If there is a use case for moving a docker's appdata to a different disk, Plex and similar apps are the prime candidates. An SSD does substantially speed up media browsing, so this data definitely deserves to be on an SSD IMO. So most people just keep it in their appdata share on the cache drive and grumble a little. This is an area where I would like redundancy. I tweak my metadata (posters, tags, sort titles, etc.) to my preferences, and losing that and reconstructing it would be a PITA. (But whether it is worth 2x the cost for 2 SSDs, I'm not sure. I'd rather just be able to back it up easily.) I'm actively exploring creating a loopback device on the cache disk for this, which would make backing it up quite easy (a rough sketch of the idea follows after this results list). I don't think anyone has done this yet. I am actually moving my array to a new server and redoing all of my Dockers and settings, and will start populating to a loopback device. I will be doing the metadata rebuild this one time and endure the PITA. It will allow me to restructure, which I've wanted to do for a while. 6 - Working disks from a VM. While the VM disk itself is on the cache, from inside the VM you can write to any array disk or UD on the server. You can even map drive letters to different drives / shares on the server. (Remember, everything on the server is local to the VM, and despite looking like network drives, they run at very near native speed.) I have a UD spinner dedicated to my VM for day-to-day stuff that gets a drive letter. And I have another letter for an array location for personal files that I want stored on the array. I also have one mapped to my second SSD, which I use to store VMware disk images that I use inside the KVM VM (yes, that does work, with one setting change). This is a big, comparatively slow SSD, but it gives very good performance to the VMware VMs I've had for years. Hope this helps. I encourage others to post their thoughts on optimal use of SSDs in unRAID. Most of this is based on my experience, and I'm sure others will disagree or have other suggestions. (#ssdindex - Using SSDs in your array)
  13. The work by bubbaq, adapted into the plugin, results in the drive naming (model/serial number) being set correctly, so a disk mounted on the Areca is named exactly the same as if it were plugged into any other card or motherboard port. But getting the temperature is dependent on getting a valid SMART report. I do believe there is a way to configure that for Areca drives in the Dynamix webGUI, and hopefully, if you do configure it, temps will show correctly. @bonienl may be able to confirm and point you in the right direction. By the way, the ARC-1280 is a PCIe 1.1 card in an x8 package. You can hook up 7 (maybe 8) spinners and get good performance in parity checks, but adding more drives is going to start constraining throughput (see the bandwidth arithmetic sketch after this results list). (The I/O is only constrained when all disks are running in parallel, so adding UD devices to the controller beyond the 7 is fine.) But true 24-drive operation is a pipe dream unless your use case only exercises a few at a time. The Areca cards do support creating a RAID0 parity, which you can do nicely with the 1280 card. Such a parity made up of, say, 2 4T 7200 RPM drives would provide a very fast 8T parity drive, capable of over 330 MB/sec, which can help with write speeds to the array. A PCIe 1.x card has half the bandwidth of a PCIe 2.0 card. A PCIe 2.0 x8 card would therefore support 2x as many spinners (14 or 15) running in parallel. The best choices for a 16 port card are the LSI SAS9201-16i and the LSI SAS9201-16e. The -16e is quite reasonably priced but is set up for externally mounted drives. The cables could be routed inside the case and used for internal drives, but its cables use a totally different connector than most SAS cards. There is an Areca card (1203-8i) that is a 2.0 card with an x4 connector. It supports 7 or 8 spinners in an x4 slot, and it supports RAID0 parity. I have one of these and am quite happy with it. Somewhat outrageously priced, but I found one on eBay for $100 and went for it. PCIe 3.0 cards are double the bandwidth of 2.0, meaning in theory an x8 card could support about 30 spinning drives. I don't know of a controller at anywhere near a reasonable price that goes that high, although the SAS9205-24i might be one to watch for if you can find it. Cheers! (#ssdindex - Areca / PCIe bandwidth)
  14. As Johnny said, in normal use, either a correcting check or a non-correcting check will find the same zero errors. It is only after a hard shutdown that there is a legit reason for a parity sync issue, and in such a case running a correcting check first would avoid a second parity check. But if you ever suspect a disk may have been corrupted, you don't want to run a correcting check. Reasons you might suspect this include falling drive attributes, log entries pointing to drive problems, a failing memory chip (which has since been corrected), or running a drive outside the array (for example, to run a recovery process on a different platform). In other words, if you are doubting a disk and want to run a parity check, do not run a correcting one. A non-correcting check would provide peace of mind if it comes back clean, or if not, at least give you a sense of the scope of the corruption as you consider what to do. If you had corruption, you'd really want precomputed checksums (e.g., md5, or a BTRFS scrub) to compare to the current disk data. And/or data backups. (A checksum sketch follows after this results list.) Without them, you might consider pulling the suspect disk from unRaid. UnRaid will simulate the removed disk using parity and the other drives, and you can mount the actual removed disk as a UD. You would then have both the real and emulated disks online at the same time and can compute / compare checksums to see if corruption (differences) exists, and if it does, try to figure out which copy is correct (which may not be easy or even possible without a source of truth). After the analysis you'd have more data to point to the next step. I would like to point out that a parity check will never tell you which disk is the cause of a sync error. With dual parity, it was hoped that the two parities might be able to triangulate and tell you which disk changed, but that is not possible today (whether it is theoretically possible in the future, I am not sure). So if you ever have a random sync error with no particular data disk to suspect, unRaid gives you no help in knowing whether it is parity or a data disk that is wrong. We tend to assume parity, as filesystems are ruggedized by heavy commercial use. As mentioned above, the best tools to really figure it out are checksums that you'd have to have been maintaining before the issue and that represent truth. And/or backups. Otherwise you'd be stuck doing the analysis above on each and every data disk (you'd have to know what you're doing unRaid-wise, and keep the array strictly read-only). That, or just accept the possible corruption by running a correcting check. I'll point out that leaving a known parity sync error in place is not a good idea. If any drive were to fail and you did a rebuild, it could be corrupted (maybe subtly, maybe in unused parts of the disk, but it would not be the mirror image of the original). A correcting check would fix that, even if it means accepting whatever corruption has already occurred. Summary ... UnRaid does an extremely good job of maintaining parity. But if hardware, user error, or unexplained random events cause sync errors, with no hard shutdown involved, your best line of defense against data corruption is a set of checksums from a point when you know the data was all valid, and then going to a backup to recover the bad file(s) (or re-obtaining the file in question in another way). Otherwise you are in for a very frustrating experience trying to figure it out, and in the end you may have to settle for knowing you have (or may have) corruption somewhere, with no way to know for sure or to figure out the affected disk or files. And deleting every possible file that could be corrupted means losing too much valuable data! (#ssdindex - parity sync errors)
  15. You might say reconstruct mode is slightly safer. It is literally rebuilding pieces of parity with every copy. If parity were corrupted, this would put it right for the sectors occupied by the newly copied files; normal writes would perpetuate the parity corruption. (I don't want to overstate this - unRaid does an outstanding job of maintaining parity, and the chances of it getting out of sync are very low and would very probably point to a user error or a hardware issue.) Interesting thought ... If you enabled reconstruct mode on a new array before any disks were even formatted, and then did a massive copy filling the largest array disk with reconstruct mode enabled, parity would be inherently built (and any sectors that weren't would not negatively affect a drive rebuild, and could be corrected by an ensuing parity check). Very interesting ... (#ssdindex)
  16. The root cause of problems like you are having normally traces back to cabling issues. I very highly recommend hot-swap style cages like the SuperMicro CSE-M35T-1B. Once they are installed and burned in, you can swap disks without risking nudging cables and creating the types of instability and risks of data loss you are experiencing. Dual parity may help recover in some situations, but it is not a substitute for these cages, which actually prevent the problems from happening in the first place. I've been here a long time, and I was skeptical of the need for these types of cages. I remember writing several posts debating others who suggested them. But I use them now on all my servers. Before, I had to be prepared to spend several hours running tests after a drive addition or exchange, troubleshooting to ensure my drives were staying online and resolving connection issues. Several frustrating times I found myself unplugging all the drives and connecting them one at a time to try to isolate squirrely issues. All this drama is over. I can now swap a drive out in less than 5 minutes and know that there are no side effects. I can even pull every disk from the server, move the server to a new location, and reinsert all the drives - in 15-20 minutes, without any issues. Priceless! Check eBay, as the cages I referenced above are often available used at less than half the price of new. Most of my cages are used and they work great. This model is engineered extremely well, and I recommend it. But there are other brands, and you can decide for yourself. (#ssdindex)
  17. I believe limiting shares to specific disks is a more organized way to manage user shares. Others like the idea of viewing unRAID as "one big disk", in which case you can have all of your drives containing all of your shares; that also works, but I believe it has some disadvantages. The major one being that, in the event of a catastrophe in which multiple drives are lost, having specific disks contain specific shares will very frequently result in the surviving data being complete and useful. Otherwise, you could have only partial data from several shares, which may be useless in its partial state. And you might not even know whether you have it all or not (which can be especially frustrating). Depending on the type of files, this might pertain more to the data within a share being on a single disk than to the entire share being on a single disk. For example, you might rather have all of the episodes of show A survive and lose the episodes from show B, than have some episodes from show A and some episodes from show B survive, with neither being complete. Unless all of your data in a share has a very similar structure and you have split level set up correctly, consider the allocation method ("fill up", "most free", "high water") carefully. "Fill up" or "high water" will work better to keep related files (which are typically created at or near the same time) together on the same physical disk. Avoid "most free" in most cases: if you have a bunch of disks in a share that have about the same amount of free space, "most free" will tend to fan out larger files (a higher risk of separating related files onto different disks), similar to dealing from a deck of cards. "High water" will deal half the deck to each disk, then go back to the first disk and deal half of the rest (1/4 of the deck), and go around all the disks again, then 1/8, etc., etc. So as the disks get fuller and fuller, the risk of files that you'd prefer to keep on the same physical disk getting fanned out gets higher. I actually prefer "fill up". The disk will fill until it passes the min-free boundary, at which time unRAID moves on to the next disk. So it's like dealing 99.9% of the deck, keeping just a few cards in reserve, and then moving on. Generally fill-up does a good job of keeping files together regardless of the split level, although setting split level is important for something like TV shows, which can come in incrementally over a long period of time. I have never seen the fascination with filling the disks in a uniform way. I prefer to have each new disk fill in turn and do my best to keep related files together. (#ssdindex)
  18. @shEiD @johnnie.black @itimpi Oh how we love to be comforted! While it is true that the mathematics show you are protected from two failures, drives don't study mathematics. And they don't die like light bulbs. In the throes of death they can do nasty things, and those nasty things can pollute parity. And if it pollutes one parity, it pollutes both parities. So even saying single parity protects against one failure is not always so - but let's say it protects against 98% of them. Now, the chances of a second failure are astronomically smaller than those of a single failure. And dual parity does not help in the 2% of cases where even a single failure isn't protected, and that 2% may dwarf the percentage of failures dual parity is going to rescue. I did an analysis a while back: the chance of dual parity being needed in a 20-disk array is about the same as the risk of a house fire. And that was with some very pessimistic failure rate estimates. Now RAID5 is different. First, RAID5 is much faster to kick a drive that does not respond within a tight time tolerance than unRaid (which only kicks a disk on a write failure). And second, if RAID5 kicks a second drive, ALL THE DATA in the entire array is lost, with no recovery possible except from backups. And it takes the array offline - a major issue for commercial enterprises that depend on these arrays to support their businesses. With unRaid the exposure is less, only affecting the two disks that "failed", and still leaving open other disk recovery methods that are very effective in practice. And typically our media servers going down is not a huge economic event. Bottom line - you need backups. Dual parity is not a substitute. Don't be sucked into the myth that you are fully protected from any two disk failures. Or that you can use the arguments for RAID6 over RAID5 to decide if dual parity is warranted in your array. A single backup disk the size of a dual parity disk might provide far more value than using it for dual parity! And dual parity only starts to make sense with arrays containing disk counts in the high teens or twenties. (#ssdindex)
  19. @TODDLT The interplay between the PCIe spec (1.x, 2.0, 3.0) and the number of lanes (x1, x4, x8, x16) is a little confusing. Each PCIe spec is 2x faster than the previous one, and the lanes multiply the bandwidth associated with the PCIe spec. The bus and the card negotiate a "spec" that is the lower of the two. So a slot that is PCIe 2.0 with a card that is PCIe 1.0 will run at PCIe 1.x speeds. If you have a PCIe 2.0 x1 slot and a PCIe 2.0 x1 SATA controller, you'd be able to achieve about 200 MB/sec per drive with two drives (a total of about 400 MB/sec); a single drive would have the full 400 MB/sec to itself (see the bandwidth arithmetic sketch after this results list). Running 2 drives at 200 MB/sec each is not a significant degradation in performance for a spinning disk. But even one fast SSD would be faster than the full 400 MB/sec provided, so I would not recommend that. A PCIe 1.1 slot has only half the bandwidth. A single spinning drive would be fine, but 2 would be too slow IMO. No way would I hook up an SSD. A PCIe 2.0 x1 card is a bit hard to find. I found some Marvell chip versions (which can cause other problems, and which I would not heartily recommend). I don't know that you'd find one suitable. I believe @johnnie.black's comments above were based on a 1.x controller card, which would limit your 2.0 slot to 1.x speed. (#ssdindex - PCIe Speed)
  20. Great job in terms of videography, pacing, and content! Truly outstanding!! A few questions tangential to the technical content: 1 - Many users are using Dockers for downloading as well as Plex, with Plex being pretty resource intensive at certain times. And most people would have 4 cores, not 8. And 16G of RAM is probably most common. For a user that wants a basic Windows VM (non-gaming), how would you recommend provisioning CPU and RAM? Is there a minimum recommended Windows config that won't slow down Plex? 2 - I've always thought that splitting a core between host and VM could be a good thing. For example, if you have a VM with 1 thread from each of two cores, and unRaid owned the others, unRaid would still have access to all of the cores for transcoding - a good thing if the Windows VM is often idle. Why the recommendation to pin complete cores to VMs and not share them, in essence taking them out of the game even if they are lightly used much of the time? Thanks again for this and your other videos! Great resources for unRaid users!! I plan to use this one and the one on online backups in the next few weeks, after completing my current drive upgrade cycle. (#ssdindex - see first post in thread)
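
Command sketches referenced in the results above

For result 1, a minimal sketch of the disk-to-UD copy step, run inside a "screen" session so it survives a closed terminal. This is only an illustration: the disk number, the Unassigned Devices mount point, and the script name are placeholders, not anything from the original post.

    #!/bin/bash
    # copy_to_ud.sh - sketch of copying one old array disk to a new 10T disk
    # mounted as an Unassigned Device. Run it as:  screen -S diskcopy ./copy_to_ud.sh
    SRC=/mnt/disk5            # old 3T array disk (placeholder)
    DST=/mnt/disks/new10T     # new 10T UD mount point (placeholder)

    # -a preserves permissions/timestamps, -v lists files, --stats prints a summary
    rsync -av --stats "$SRC/" "$DST/"

    # Quick sanity check before retiring the old disk: compare file counts.
    echo "Source files: $(find "$SRC" -type f | wc -l)"
    echo "Dest files:   $(find "$DST" -type f | wc -l)"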
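
For result 2, a sketch of the "rename the root folder first" tip that avoids the user share copy bug. The disk and share names are just examples; adjust the share's include/exclude settings in the webGUI before copying.

    #!/bin/bash
    # Sketch of the safe disk-share-to-user-share copy from result 2.
    # Example names only: disk4 holds the data, "Movies" is the user share.

    # 1. Rename the root folder on the disk so its files no longer belong to
    #    the "Movies" user share (they show up as a temporary "MovieTemp" share).
    mv /mnt/disk4/Movies /mnt/disk4/MovieTemp

    # 2. In the webGUI, make sure the Movies share excludes (or does not
    #    include) disk4.

    # 3. Copy (not move) to the real user share; unRAID places the files on
    #    the disks configured for "Movies".
    rsync -av /mnt/disk4/MovieTemp/ /mnt/user/Movies/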
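
For result 8, a small sketch of keeping a dated SMART baseline so the deltas the post talks about are easy to spot. It uses smartctl (part of smartmontools, which unRAID relies on for its SMART reports); the device name and snapshot folder are placeholders.

    #!/bin/bash
    # Sketch: save a dated SMART attribute snapshot for one drive and diff it
    # against the previous snapshot to spot changes (reallocated sectors,
    # pending sectors, falling normalized values, etc.).
    DEV=/dev/sdb                 # placeholder device
    DIR=/boot/smart-baselines    # placeholder folder (persists on the flash drive)
    mkdir -p "$DIR"

    NEW="$DIR/$(basename "$DEV")-$(date +%Y%m%d).txt"
    smartctl -A "$DEV" > "$NEW"

    # Compare with the most recent earlier snapshot, if there is one.
    PREV=$(ls -1 "$DIR/$(basename "$DEV")"-*.txt 2>/dev/null | grep -v "$NEW" | tail -1)
    if [ -n "$PREV" ]; then
        diff "$PREV" "$NEW"
    else
        echo "No earlier snapshot to compare against."
    fi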
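
For result 9, the mount-point idea in command form. It borrows the post's /dev/sde and /mnt/mydisk examples and assumes the disk's data is on the first partition (/dev/sde1).

    #!/bin/bash
    # Sketch of the mount-point concept from result 9.
    mkdir -p /mnt/mydisk            # the mount point is just an empty folder
    mount /dev/sde1 /mnt/mydisk     # "open the book": the disk's files appear here

    ls /mnt/mydisk                  # browse the disk's contents
    touch /mnt/mydisk/myfile.txt    # create a file on the disk

    umount /mnt/mydisk              # "close the book"; the folder is empty again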
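
For result 12, a sketch of the loopback-image idea for big appdata like Plex. The post only describes exploring this, so treat it purely as an illustration; the image size, paths, and choice of filesystem are assumptions.

    #!/bin/bash
    # Sketch: keep Plex appdata inside a single image file on the cache drive so
    # the whole metadata library (a huge number of tiny files) can be backed up
    # as one large file.
    IMG=/mnt/cache/appdata/plex.img   # image file (path and size are assumptions)
    MNT=/mnt/cache/appdata/plex       # mount point mapped into the docker as /config

    # One-time setup: create a 100G sparse image and put a filesystem on it.
    truncate -s 100G "$IMG"
    mkfs.btrfs "$IMG"

    # Mount it at array start (e.g., from the go file or a user script).
    mkdir -p "$MNT"
    mount -o loop "$IMG" "$MNT"

    # Backing up the metadata then becomes a single large-file copy, e.g.:
    # rsync -av "$IMG" /mnt/disk1/backups/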
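
For results 13 and 19, the rough bandwidth arithmetic behind the "how many spinners per card" estimates, as a sketch. The per-lane usable figures (~200/400/800 MB/sec for PCIe 1.x/2.0/3.0 after overhead) and the ~200 MB/sec figure for a fast spinner are rules of thumb, which is why the posts quote slightly more conservative drive counts.

    #!/bin/bash
    # Sketch of the arithmetic in results 13 and 19: usable bandwidth per lane,
    # times the number of lanes, divided by the peak speed of a fast spinner.
    usable=(0 200 400 800)   # rough usable MB/sec per lane for PCIe 1.x / 2.0 / 3.0
    lanes=8
    spinner=200              # approximate peak MB/sec of a fast spinning drive

    for gen in 1 2 3; do
        total=$(( ${usable[$gen]} * lanes ))
        echo "PCIe ${gen}.x x${lanes}: ~${total} MB/sec -> ~$(( total / spinner )) spinners at full speed"
    done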
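
For result 14, a sketch of the "precomputed checksums" idea: build a checksum manifest while you know the data is good, and verify it later if a sync error ever makes you suspect corruption. The disk and manifest paths are placeholders; community plugins exist that automate the same thing.

    #!/bin/bash
    # Sketch: create and later verify md5 manifests for one array disk, so a
    # future parity sync error can be traced to (or ruled out as) real data
    # corruption.
    DISK=/mnt/disk3                   # placeholder array disk
    MANIFEST=/boot/checksums/disk3.md5
    mkdir -p "$(dirname "$MANIFEST")"

    # Build the manifest while the data is known good (slow: reads every file).
    ( cd "$DISK" && find . -type f -print0 | xargs -0 md5sum ) > "$MANIFEST"

    # Later, verify; any FAILED lines identify the corrupted files.
    ( cd "$DISK" && md5sum -c --quiet "$MANIFEST" )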