blumpy

Members
  • Posts: 8
  • Joined
  • Last visited



  1. @kizer thanks for your input. I’ll give you the tl;dr. I have two main storage needs: 1) sample libraries and 2) a project library. A sample library is massive and needs maxed-out RAM and ultra-fast storage. If I open a patch, let’s say a piano, the front of each note is loaded into RAM; some of these can be 200+ MB each. When I play a note, the rest of the sample is streamed from storage as I play. With 60+ layers of instrumentation/libraries you’re filling 64+ GB of RAM, and add to that multiple workstations. So if you pooled storage, you’d need lots of space that’s ultra fast and cannot be moved, because each library has a fixed directory path. The projects folder will be streaming video plus dozens, if not more than a hundred, uncompressed audio files. These files can sometimes point to/reference files from prior projects, so it’s best to keep all projects in the same pool; otherwise you end up redefining hundreds of paths or maintaining multiple copies of the same thing. What I need is a parity-protected JBOD with an NVMe read cache. 40+ TB of SSD is a solution, but it’s overkill since 90% of it will lie dormant, and it’s also not my best solution. The best solution is an expandable 40+ TB of HDD plus 4 TB of NVMe with an intelligent read cache/tier, networked over 10GbE. That’s what I started building, but I quickly discovered the cache was not what I expected. It’s entirely my fault for not reading enough of the literature, but I think an intelligent read cache/tier would be a very nice feature for people trying to work with media, which is why I was suggesting it as a feature for future versions of unraid. I’m still exploring solutions. I think I’m just going to stick with local NVMe storage and SATA SSDs and just back up to the unraid system, but that generates tons of file redundancies I was hoping to avoid. I am considering a 10GbE RAID 10 using PCIe cards, but it sure would be nice to have an NVMe cache for the low latency.
     To do an NVMe tier I’d need to run Windows Server with Storage Spaces Direct or some Linux variant. I just can’t seem to figure out a way to do this with unraid as I’d planned.
  2. Thank you for the suggestion, but that is not possible. I’d have to reassign directory paths for literally thousands of files every time a project is moved back and forth, or I’d have to copy the data into the project instead of referencing it, thus creating TBs of redundant files.
  3. @trurl I wouldn't trust myself to write such a script. That said, thank you for trying to find a workaround. I'm just suggesting that an SSD cache should work in the more traditional way, where file writes are cached and then flushed to the array, and recently read and heavily used files are kept in the cache for fast access. As others have suggested, trying to create a workaround with separate folders, shares, etc. simply defeats the purpose of the server. I have neither the time nor the scripting prowess to confidently write such a script and trust it. I love the idea of UNRAID, but without such a feature it's going to remain a storage tank.
  4. There are several reasons not to use separate shares. When dealing with massive sample libraries, auditioning libraries while having to repoint the data's location is completely impractical; it would be far superior to just run off local HDD. Perhaps I should just invest in a bunch of 2 TB SSDs and leave it at that, but a read-cache system on NVMe would be better. Other reasons:
     • Creating multiple versions of files instead of pointing to single files outside the project's pool.
     • Having to reallocate the file locations in projects that have been moved. If I'm working on a project and want to recall a dozen files from a previous project, I'd need to copy those files to the new project instead of simply referencing the files that already exist.
     • Requiring multiple users to be aware of which version is current to work from, instead of everyone working from one version.
     • Creating dozens or more unplanned redundant backups when working with projects.
     • Being unclear which copy of a project is the latest.
     • Risking user error when replacing previous projects.
     More importantly, why have a cache that only works one way? I'd imagine a read cache would benefit most people.
  5. Thank you very much for the recommendations, but moving projects in and out of various share locations defeats the purpose; if manual oversight is required for projects on each workstation, I should just remain on local NVMe storage for the speed. Working from multiple shares also creates unplanned redundant backups of projects if someone moves old projects in for editing. It’s good to have several backups, but everything becomes a lot easier if it all lives in one share/pool/space/cloud/etc., which was the purpose of this build.
  6. When a User Share is set to Prefer, what criteria does Mover use to select the files it will move onto the cache? In other words, if I have 10 TB on the array but only 500 GB of SSD, which files is Mover moving into the cache? Is it moving the most recently read and written files? Is it going alphabetically? First level first, until the cache is full?
  7. I would imagine I’m not the only person who would benefit from a read cache, or perhaps I’m getting something wrong. Being able to work on media directly from a NAS seems like a no-brainer to me. Having what would ultimately be a giant Apple Fusion Drive-style JBOD with parity protection seems like it would be very useful to media creators.
  8. For media workflows it would be nice to keep all of your projects, clips, samples, etc. in their own user share, but have all the recently used projects, clips, etc. intelligently mirrored to the SSD cache for acceleration. When a file is written to the user share, it is written to the cache; in the background, the cache is flushed to the array so that the data has parity protection. Recently read files would be mirrored onto the cache for accelerated access but still stored on the array. The reason for this is not having to deal with local storage copies and conflicts: projects remain in their respective user share, and an automated backup system would not accumulate redundancies, because users would no longer need to manually copy data to and from SSDs to work. All that said, I'm very new to UNRAID, so perhaps this is a feature that can be implemented manually, but thus far I've not found it in the forums.
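The preload-and-stream pattern described in post 1 (the front of each sample resident in RAM, the tail streamed from storage on demand) can be sketched roughly as below. This is a toy illustration, not any sampler's actual API: the class name and the preload size are hypothetical.

```python
PRELOAD_BYTES = 64 * 1024  # hypothetical head size kept resident in RAM


class StreamedSample:
    """Keep the start of a sample in RAM; stream the rest from storage.

    This mirrors how disk-streaming samplers hide storage latency:
    the preloaded head plays instantly while the rest of the file
    is fetched from disk as playback continues.
    """

    def __init__(self, path):
        self.path = path
        with open(path, "rb") as f:
            self.head = f.read(PRELOAD_BYTES)  # the resident "front of the note"

    def read(self, offset, length):
        # Serve entirely from the preloaded head when possible (no disk I/O).
        if offset + length <= len(self.head):
            return self.head[offset:offset + length]
        # Otherwise stream the requested range from storage.
        with open(self.path, "rb") as f:
            f.seek(offset)
            return f.read(length)
```

This is why the storage tier under a sample library has to be low-latency: every `read()` past the preloaded head is a seek-and-read against the backing device, issued in real time as notes sound.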
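The read-cache behavior asked for in posts 3 and 8 (recently read files mirrored onto fast storage, with the array copy staying authoritative and parity-protected) is essentially a least-recently-used cache keyed by file. A minimal sketch, assuming hypothetical paths and a toy capacity, nothing here is an unraid feature:

```python
import os
import shutil
from collections import OrderedDict


class LRUReadCache:
    """Mirror recently read files onto fast storage, evicting least-recently-used.

    The array copy remains the source of truth; the cache directory only
    holds acceleration copies, as the posts above describe.
    """

    def __init__(self, cache_dir, capacity_bytes):
        self.cache_dir = cache_dir
        self.capacity = capacity_bytes
        self.used = 0
        self.entries = OrderedDict()  # array_path -> (cache_path, size)
        os.makedirs(cache_dir, exist_ok=True)

    def open_for_read(self, array_path):
        entry = self.entries.get(array_path)
        if entry:
            self.entries.move_to_end(array_path)  # mark as most recently used
            return open(entry[0], "rb")
        size = os.path.getsize(array_path)
        if size <= self.capacity:
            # Evict least-recently-used mirrors until the new file fits.
            while self.used + size > self.capacity:
                _, (old_path, old_size) = self.entries.popitem(last=False)
                os.remove(old_path)
                self.used -= old_size
            cache_path = os.path.join(self.cache_dir,
                                      os.path.basename(array_path))
            shutil.copy2(array_path, cache_path)  # mirror, never move
            self.entries[array_path] = (cache_path, size)
            self.used += size
            return open(cache_path, "rb")
        # Too large to cache: read straight from the array.
        return open(array_path, "rb")
```

The key design point, matching post 8, is that `copy2` mirrors rather than moves: evicting a cache entry loses nothing, because the parity-protected array copy was never touched.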