bbqninja


  1. How does this apply to unRAID? Software RAID is not hardware-dependent. I really doubt that if Tom stops unRAID development he would keep the code closed. But even if that happens, we can always migrate to other systems.
     The unRAID version that you can purchase today, and which is officially supported, does not support:
     -3+ TB drives
     -a 64-bit kernel
     -SSDs (no TRIM support)
     -newer SATA/SAS cards
     So I'd say he has a big point there.
  2. Heh heh... I didn't wait long enough to check the colors. At my age, orange and red look pretty close. (For the record, I think green check-mark balls, red exclamation-point balls, and orange question-mark balls -- i.e., both colors and symbols -- makes more sense.) I figure somebody braver than I, with a standby play system, can reproduce it and check the colors. But I think this is a REAL situation that ought to be addressed. Probably not a 5.0-specific issue, but a real issue, IMHO. Thanks for hearing me out. A preclear is in progress (which will, of course, wipe the cache drive), and I will be fine. The mover never ran for me, but for others... it ought to at least be checked when a cache drive is added, and some explanation given to the user. I mean, a cache drive ought (in 99% of normal situations) not to have files ready to copy on top of an existing unRAID system.
     With all the critical bugs that need to be fixed (mvsas issues, NFS, etc.), why is this a real issue? It's a dot next to a share. It does no harm, and some people use it to see whether a share has files on the cache drive. As for the cache drive moving files over: it does this at your scheduled time. A cache drive is going to move files to the array if they are on it; that's its whole purpose. You wouldn't throw a data disk in without preclearing it, right? So why is a cache drive you plan to add any different? You saw/knew there were existing files on it, so of course it's going to move those over during the next scheduled mover run. You have plenty of time to delete the data before it moves the files, and you can always wipe the disk before adding it. I don't see the problem.
     Sweet, so there's an included GUI preclear in 5.0, and it prompts you to run it when you pop in a drive it doesn't recognise? This has the potential to destroy data without warning, and the reason people use unRAID instead of setting up the disks by hand on Ubuntu/Slackware/Debian is that it's an appliance, and therefore should not do "bad things" that require manual intervention.
  3. -What drive is in it?
     -Does the enclosure have a separate power supply? (I'd guess not, it being 2.5".)
     I've never seen ANY 2.5" enclosure be more than super-slow.
  4. This won't work for user shares, but can't you add to the Samba config to make a disk mount read-only?
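     Something along these lines is what I have in mind -- a sketch only, assuming a disk share named disk1 mounted at /mnt/disk1 (both names are made up):

         [disk1]
             path = /mnt/disk1
             browseable = yes
             guest ok = yes
             # the key line: export this mount read-only over SMB
             read only = yes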
  5. Are there any thoughts about using an SSD as a block-level cache drive, as opposed to a filesystem-based cache drive (like ZFS supports and btrfs will be supporting soon)? Obviously 5.1... 5.5... 6.0-type stuff, not 5.0. It'd just be nice to be able to put in a fast device and have things go faster, without worrying about free space on said cache device.
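     For illustration only, this is roughly what block-level caching looks like with bcache on a stock Linux box (device names are made up, and bcache is not something unRAID ships -- a sketch of the concept, not a how-to):

         # /dev/sdb = slow backing disk, /dev/sdc = SSD (assumed names)
         make-bcache -B /dev/sdb     # formats the backing device -> /dev/bcache0
         make-bcache -C /dev/sdc     # formats the SSD as a cache set
         # attach the cache set (UUID printed by make-bcache) to the backing device
         echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
         # from here on you format and mount /dev/bcache0 instead of /dev/sdb
         mkfs.ext4 /dev/bcache0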
  6. 1. I agree 100% with what you've said; it should be in there. But 2. there are TONS of other Linux-based NAS distributions. NONE of them preclear. NONE of them have a huge number of people complaining about data loss, etc.
  7. Regarding the "stale NFS file handle" issue: is it perhaps that it needs an fsid per export? Since we don't really have a block device, but a crazy virtual filesystem, for user shares this makes sense.
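     Concretely, I mean something like this in /etc/exports (the share names and fsid numbers are made up):

         # a fixed, unique fsid per export, so file handles stay stable
         # across remounts of the virtual user-share filesystem
         /mnt/user/Movies  *(rw,async,no_subtree_check,fsid=100)
         /mnt/user/Backup  *(rw,async,no_subtree_check,fsid=101)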
  8. Any thoughts, then? Limetech has said that it is solid and should have been named "final".
  9. I think most people reckon that would be a REALLY bad idea. unRAID should be a fully optimized storage server. Either:
     -get a super-cheap separate box (even one of those Pogoplugs would work for a TV-recording server) and mount unRAID over NFS
     or, if you MUST have only one physical box,
     -run VMware ESX
     Well, 'REALLY bad idea' is too strong to say here, I would say. Including those modules wouldn't do much harm. On the other hand, there are more important priorities to focus on.
     My thoughts (and I haven't looked into the unRAID kernel config much):
     -if the unRAID kernel is compiled statically (without module support enabled), then I don't want it in the kernel, where it could push other things out of cache pages
     -if the unRAID kernel is compiled with modules enabled, then it's trivially easy to load a Slackware VM with the same kernel revision, build the modules in question, then copy them to the USB stick and load them from the go script (see the sketch below)
     Either way, it's apparent he has limited hours to work on unRAID and thus should not spread features thin. If Limetech were a 30-person team, then sure, have a specialized version for everything you can think of!
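     A sketch of that last step -- the module name and location are placeholders, not a real recipe:

         # appended to the go script on the flash drive (/boot/config/go)
         # load a module built against the matching kernel in a Slackware VM
         insmod /boot/extra/mymodule.ko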
  10. I think most people reckon that would be a REALLY bad idea. unRAID should be a fully optimized storage server. Either:
      -get a super-cheap separate box (even one of those Pogoplugs would work for a TV-recording server) and mount unRAID over NFS
      or, if you MUST have only one physical box,
      -run VMware ESX
  11. Do you have a cache drive? If so, I BELIEVE you can simply:
      -create a directory called "move"
      -place all of the user share's root-level directories in said directory
      -wait overnight
      -move all of the directories back to the root level and delete "move"
      -wait overnight again
      (Roughly as in the sketch below.)
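      In shell terms it would be something like this -- a sketch only; "MyShare" and the /mnt/user path are assumptions, and I haven't tested this exact sequence:

          cd /mnt/user/MyShare
          mkdir move
          for d in */ ; do
              [ "$d" = "move/" ] && continue   # don't move the holding dir into itself
              mv "$d" move/
          done
          # ...wait overnight for the mover to run...
          mv move/* .
          rmdir move
          # ...wait overnight again...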
  12. Hello, I ran your scripts, and things started to look like they were working (it started backing up, etc.). The next morning, though, the array had basically crashed. I was able to get in through telnet, and this is what I noticed:
      -NFS was hosed; all network machines connected via NFS were stalled (processes waiting on sync)
      -the waiting-task count was HUGE, 150 or so (from top/uptime)
      -I was out of RAM (from top)
      -there were a few hundred processes for the mover script (hence the waiting tasks, I guess)
      -the syslog was constantly showing rsync messages (from the mover script)
      I rescued it by manually killing basically every process and then stopping the array from the web config (array stop was unresponsive while the mover was running). My guess is that the parallel rsyncs over the "slice" files (which make up the virtual HD/sparsebundle that Time Machine mounts) ran me out of memory. (I have either 1 or 2 GB of RAM and a dual-core AMD AM2 processor.) Which of these should I do:
      -turn off the cache drive for my Time Machine share (I really don't like this, for obvious reasons)
      -add swap via the cache drive (a very non-standard change; might it affect other things?)
      -something else?
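      For the swap option, the textbook sequence would be something like this (the 2 GB size and the /mnt/cache path are assumptions):

          # create and enable a swap file on the cache drive
          dd if=/dev/zero of=/mnt/cache/swapfile bs=1M count=2048
          chmod 600 /mnt/cache/swapfile
          mkswap /mnt/cache/swapfile
          swapon /mnt/cache/swapfile
          # undo later with: swapoff /mnt/cache/swapfile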
  13. Backing up 600 GB as we speak! One "interesting" thing: it told me it wasn't a supported version when I tried to log in as a user (root... bad, I know), but it worked fine as guest.
  14. Ah, as for where to buy: I live in Australia, but my parents live in the USA and are visiting soon, so they will order it and bring it with them.
  15. From what I've read, this is "THE" controller card to get. So, a few questions:
      -Is Amazon the best place to get it from?
      -Are these the right cables I will need? http://www.amazon.com/Multi-lane-Internal-SFF-8087-Serial-Breakou/dp/B000FBT47E/ref=wl_it_dp_o_npd?ie=UTF8&coliid=I3CDS5FIQGR5S&colid=1AZ9FI38S9GRF
      -Any gotchas or quirks about the card to be aware of?
      -Anything else I should be considering for high-density storage?
      Thanks!