-Daedalus

Members
  • Content Count: 311
  • Joined
  • Last visited

Community Reputation: 41 Good

About -Daedalus

  • Rank: Member
  • Gender: Undisclosed


  1. I wouldn't mind, but I just gave a presentation last week on ZFS. You'd think I'd have remembered that file-level and block-level are different things. I blame the beer. Thanks guys, blonde moment of the day. Hopefully the only one.
  2. Good to know. I assumed a drive with no data = a zeroed drive, and therefore wouldn't affect parity. The wiki mentions running a clear-me script, but it doesn't mention doing anything special with the drives. I assume it adds a flag in the drive header or something as well?
  3. Hi all, sanity check here. I removed two drives that had been cleared of data (ls -la showed 0 on each drive). I stopped the array, did a new config, and kept the parity and cache slots. Assigned everything back, removing drives 6 and 7 and moving everything after slot 5 up two slots. Entered the encryption passphrase, ticked "parity is valid", started the array. Everything came back fine, and all the data is there. Ran a parity check, and got lots of errors, as if it's recomputing everything. My understanding was that if there was no data on the drives, parity shouldn't have to be rebuilt (see the parity sketch after this list). At th
  4. +1 If it would cause too many problems, maybe each disk could be assigned an alias, keeping the original /mnt/disk mountpoint, with an option in the UI to display either the mountpoint name or the alias.
  5. As to your first problem, this is a known issue with RC2. Manually spin up your drives (even if they're already spun up, from what I understand) by clicking the little status LED symbol on the left. The temperatures should display correctly after this.
  6. I completely forgot you can do this through the UI now. Ignore my first post.
  7. Just to be clear here: pinned != isolated. They're different things. Pinned just means a CPU core that a VM can use, but anything else can also use that core if it's free. This is done by pinning the core in the GUI. Isolating a CPU core means unRAID doesn't touch it at all. This is done by appending isolcpus=x,y,z to your syslinux config on Flash > Main. If you want to fully isolate the cores so that only the VM uses them, you'll need to change your syslinux config accordingly (see the before/after example after this list).
  8. Valid points all. I hadn't considered the fact that no one has written a plugin for it yet; that likely says something about all this. And you're right; I hadn't considered that some people use disk shares/mappings either. I guess we're at the same point: feature request made, wait and see.
  9. I have to disagree. unRAID is being billed as an appliance that gives you enterprise features for less money, and does lots of things without requiring as much user knowledge as a home-spun setup on a more common distro. If you're saying a regular user should be totally OK with using the terminal to zero a drive with dd (see the dd example after this list), then you're kind of missing the point. I could actually see this saving time, in the sense that a user kicks off the drain operation, then comes back a day or two later, can restart the array at a more convenient time, and yank the drive. Ho
  10. I'm not ephigenie, but I liked the idea, so I'll give my two cents on it, for what it's worth: maybe something like a broad-strokes roadmap. Nothing too concrete, but even a post with headings like:
      Features being worked on for the next major/minor release (multiple cache pools)
      Bugs being squashed for the next major/minor release (SSD write amplification)
      Future possibilities currently under investigation (ZFS)
      You could make the forum visible to community developers only. Or if you're feeling particularly transparent, forum members with enough pos
  11. This is why I don't comment much; I manage to completely miss the obvious most of the time. The reasoning for not including drivers off the bat makes complete sense now; carry on.
  12. Fair point! I might have to pick up one of these to test with. I had no idea they existed, cheers.
  13. Without going near any of the other stuff, I'm curious to hear your thoughts on this one: why the concern over install size? Most people are installing unRAID on 16-32GB sticks these days. Does it matter much if the install is 400MB vs. 100MB? I can absolutely understand the efficiency standpoint; it's a lot of space for something very niche. I'm just not sure what the downside is. The only one I can really think of is longer backup times for the flash drive, but that seems very minor. Is there something I'm missing here?
  14. Interesting. I didn't think GPU drivers were coming this soon. I assume you didn't build in support for 3rd-party modules just for that, though. Interested to see what comes of this down the road!
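
The parity sketch referenced in post 3: a minimal illustration (hypothetical Python, not unRAID code) of why a drive with an empty filesystem is not the same as a zeroed drive. Assuming single parity behaves as a per-block XOR across the data drives, an all-zero drive drops out of the XOR entirely, but leftover filesystem metadata does not:

    # Hypothetical sketch: single parity as per-block XOR (P = D1 ^ D2 ^ ... ^ Dn).
    from functools import reduce

    def parity(blocks):
        # XOR together the block at the same offset on every data drive.
        return reduce(lambda a, b: a ^ b, blocks)

    others = [0x3C, 0xA5]   # blocks at one offset on the drives that stay
    zeroed = 0x00           # truly zeroed drive: every block is 0
    empty_fs = 0xEF         # "empty" filesystem: superblocks/metadata still on disk

    # An all-zero drive contributes nothing, so removing it keeps parity valid...
    assert parity(others + [zeroed]) == parity(others)

    # ...but an empty-but-formatted drive does contribute, hence the parity errors.
    assert parity(others + [empty_fs]) != parity(others)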
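
The before/after syslinux example referenced in post 7 didn't survive the copy here, so the following is a reconstruction; the label block is the stock unRAID one, and cores 2 and 3 are example values only:

    # From this:
    label unRAID OS
      menu default
      kernel /bzimage
      append initrd=/bzroot

    # To this (isolating cores 2 and 3 so only a VM pinned to them uses them):
    label unRAID OS
      menu default
      kernel /bzimage
      append isolcpus=2,3 initrd=/bzroot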
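
The dd example referenced in post 9, for anyone curious what the terminal route actually involves. The device name is a placeholder, and dd writes destructively, so double-check the target with lsblk first:

    # Zero every block on a drive. /dev/sdX is a placeholder for the target disk.
    dd if=/dev/zero of=/dev/sdX bs=1M status=progress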