JonathanM

Moderators
Everything posted by JonathanM

  1. Once you wrap your head around how all the array and cache disks interact to form the user shares, you can safely move files from disk to disk to get your least used files consolidated onto drives by themselves. It's not super complicated, but until you understand it, don't start playing around with the individual disks, as you can cause anything from hiding files from yourself to outright irretrievable data loss. This thread is one place to start learning if you wish. https://forums.unraid.net/topic/42152-disk-vs-user-share/
  2. Primarily the support system. LSIO is much more active.
  3. See if this page answers your questions. Particularly the section at the bottom. https://wiki.unraid.net/UnRAID_6/Changing_The_Flash_Device
  4. Nope. Referring to how the folders on the individual disks get fused into the user share system.
  5. You can, but it's a little tedious. Do you have a good handle on what exactly a user share is with respect to the individual disks?
  6. Yes, by a second or two. If that latency bothers you, by all means, keep them spun up. That may sound smart-aleck, but it's truly not. The possible benefit of spinning them down is minuscule based on a random all-day usage pattern. The true beauty of unraid is the ability to keep like files on individual disks, so some can stay inactive. I have a few disks which stay spun up almost all the time, and several which can stay spun down for days or weeks at a time. Some stuff you just don't need to access regularly, but you want it available at a moment's notice vs. in cold storage.
  7. Depends on your use pattern. If you access all your drives many times a day, leave them spun up, or set a very long spin down time. I would not use a spin down delay that resulted in more than 2 spin down cycles a day. Keep in mind the mover schedule.
  8. Since it seems you may be sticking around after all, would you mind changing the title of your first post in the topic to something less dramatic? Maybe, "Why I decided to give Unraid a try" or something like that.
  9. I'm sorry I didn't see your post for what it was, a genuine effort to help. Most of the time when a first-time poster jumps into a thread with a bunch of links to external websites, it's a spammer, only out to further their own agenda. When you declined to answer any of my questions and instead got defensive, I took that as confirmation of your intent to help yourself instead of helping others.
  10. Depends on how the specific share is configured. Shares can be set to:
      • Cache: Only — write to cache and stay there
      • Cache: Yes — write to cache and move to the array on schedule; write directly to the array if cache free space is below the specified minimum
      • Cache: No — write to the array and stay there
      • Cache: Prefer — write to cache and overflow to the array when below minimum space, and move back to cache if there is free space when the mover runs
  11. Just carrying forward the tone of the post I was replying to. I replied to the original post with a little education about who we are, and what is the purpose of these forums. I gave the benefit of the doubt that the person was just googling around for similar situations, and posted here thinking we were a general tech support area. The reply was less than cordial and did not address my question at all, so I replied in kind.
  12. Then there was no need to post here.
  13. I wasn't ranting, I was asking why you were posting on an unraid forum. Do you use unraid?
  14. They are not readable without extra software in windows. If you had attached them to virtually any linux box, they would show up just fine. This will give you read only access in windows. https://www.paragon-software.com/us/home/linuxfs-windows/#
  15. Pick a reiserfs drive that currently has the least data on it. Copy all the data on that drive to any other drives on the array with space. When all the data on that drive has been safely copied and verified, stop the array, click on the disk you just finished copying off of, and change the format type to xfs. When you start the array, verify that the only unmountable drive is the one you just changed, and select the option to format it. Now that you have an empty xfs drive, pick the next target reiserfs drive and copy everything to the xfs drive, stop the array, change the format of the reiserfs drive you just copied from, and so on.

      I would recommend copying instead of moving, as one of the biggest performance issues with a reiserfs drive is modifying files on it, which includes the deletions a move performs. Formatting the reiserfs drive is a MUCH faster way of deleting all the files at once. If you don't have 4TB total free, then yes, you will need to purchase new drives to start the process.

      It sounds like you have a handle on this, but I'll repeat it here for emphasis' sake: you can NOT change the file system type with a drive rebuild from parity. Rebuilds only ever recreate the file system as a whole; if you try to change it, you will be greeted with an unmountable drive and asked to format it. You will then face a blank, freshly formatted drive, and your data will be gone. File system conversion must be done by copying the data elsewhere before changing the file system type.
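The copy-and-verify step in item 15 can be sketched with standard tools. On a real unraid server the source and destination would be array disk mounts such as /mnt/disk1 and /mnt/disk2 (hypothetical disk numbers); this demo runs entirely against temporary directories so nothing real is touched.

```shell
set -e

# Stand-ins for the reiserfs source disk and the xfs destination disk.
# On a real server these would be mounts like /mnt/disk1 and /mnt/disk2.
SRC=$(mktemp -d)
DST=$(mktemp -d)

# Fake data representing the drive being emptied.
mkdir -p "$SRC/Movies"
echo "demo" > "$SRC/Movies/film.mkv"

# Copy (not move) everything, preserving attributes.
cp -a "$SRC/." "$DST/"

# Verify BEFORE formatting the source disk: diff -r exits non-zero
# on any mismatch, and set -e would then abort the script.
diff -r "$SRC" "$DST" && echo "verified: safe to reformat source disk"

rm -rf "$SRC" "$DST"
```

The point of the `diff -r` check is that the format step is irreversible, so verification has to happen while both copies still exist.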
  16. This implies that you may have openings not sealed. ALL case openings should either flow incoming air over the drives, or have fans actively pushing air out. Any extraneous openings should be taped over; clear packing tape on the inside does a good job if you care how it looks. If you leave a passive opening, air will flow through the path of least resistance, bypassing your drives. Consumer cases just aren't built for server-grade 24/7 heat management, so you have to be extra vigilant when setting up a server in a consumer-grade case. Also, be sure any disk controller cards have forced circulation. A server case is designed to push air over all the slots. Consumer cases often leave a stagnant area extending from the bottom of the graphics card slot to the bottom of the case, since the only card most consumers use that needs extra cooling is the video card. Mount an extra fan internal to the case if needed.
  17. What does that have to do with a KVM VM hosted on Unraid? This is not a generic tech support forum, it is for users of unraid.
  18. That's exactly how many (most) people are currently using unraid. The spinning rust is in the parity array, the SSD devices are in the cache pool.
  19. Yes, the primary reason for parity checks is to confirm that the array is capable of reconstructing a failed disk accurately. That includes both the concept of mathematical accuracy as well as disk reliability. In a "normal" RAID setup, all disks are spinning all the time and pretty much participating equally to some extent. In unraid, however, it's perfectly plausible to have disks that are NEVER accessed during day to day activities, due to them not containing any data that someone needs currently. If one of those drives fails, you wouldn't know it until it was too late, when you were trying to reconstruct a different failed drive. Parity checks provide a way to keep up with the health of those seldom used drives.
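The reconstruction that a parity check protects can be illustrated with single (XOR) parity. This is a generic sketch of the math, not Unraid's actual implementation: parity is the byte-wise XOR of all data disks, so any one missing disk can be rebuilt as parity XOR the surviving disks.

```python
# Single-parity reconstruction sketch (generic XOR parity, toy data).
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte strings."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

disks = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]   # toy data disks
parity = xor_blocks(disks)

# Simulate losing disk 1 and rebuilding it from parity + the survivors.
survivors = [disks[0], disks[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == disks[1]
```

This is also why a read error on a seldom-used drive matters: if one of the surviving blocks can't be read, the XOR can no longer recover the missing one, and a parity check is what catches that problem early.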
  20. It was an idea anyway. My thought process was that even though the CPU may not be vulnerable, the mitigations would still be applied in the code regardless. Honestly I don't know enough low level coding to be able to figure it out for myself, so I just wanted to advance the theory. All these issues seemed to start popping up at roughly the same time frame, so it's tough to distinguish what may or may not be truly causal, or just coincidental.
  21. Yes, the disk could fail even though it's not being used. This would manifest by a read failure, where the disk would report it couldn't return the data from that address. The chances of that happening are vanishingly slim. Much more likely for it to give an error when trying to read the 0 that was placed there.
  22. You can't, because as far as I know it doesn't exist. However... https://linuxize.com/post/how-to-use-linux-screen/
  23. So toggling the mitigations doesn't change anything?
  24. Could you please toggle this plugin and check status with all mitigations enabled and disabled?