Lev


Everything posted by Lev

  1. My bad, I really should have included a screenshot in the OP. Thanks for the reply @bonienl
  2. With a large number of unassigned disks, the list in the pull-down menu when assigning them can be quite long. There does not appear to be any logical order to how the disks are listed, and any logical order would be an improvement. Unless the developers prefer a different sort method, it seems the list should be in ascending order of the disk letter, for example: sdb, sdc, sdd... @bonienl @Squid just FYI if this is something to add into the on-going GUI improvements.
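A natural ascending sort by device letter like that is straightforward; here's a minimal sketch (a hypothetical helper, not unRAID code) of one way the pull-down list could be ordered so that sdz sorts before sdaa:

```python
# Sketch: one possible ordering for the unassigned-disk pull-down.
# The device list below is hypothetical; unRAID's internals are not shown here.
def sort_devices(devs):
    """Sort sdX device names ascending: sda..sdz, then sdaa, sdab, ..."""
    # Longer suffixes (sdaa) sort after shorter ones (sdz); ties break alphabetically.
    return sorted(devs, key=lambda d: (len(d), d))

disks = ["sdd", "sdb", "sdaa", "sdc", "sdz"]
print(sort_devices(disks))  # ['sdb', 'sdc', 'sdd', 'sdz', 'sdaa']
```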
  3. I built this big. My advice: don't trouble yourself with those items you listed that you want to reuse. Build for where you're going, not where you've been. Write speed becomes an issue when filling it in a reasonable amount of time if all drives are empty. Look at some of my previous posts and you might find some ideas on how to speed it up. You already spent a lot of money on drives; now is not the time to be cheap. Finish the job. Let's see some pics
  4. Update: got it working. It was, as I thought, "something painfully simple"... It works as you'd expect. What caught me up was having an existing queue in NZBget. I now know that each item in the queue is set to the paths configured in NZBget at the time it is added to the queue. So with my existing queue, it wasn't until it got caught up to the point where I made the path changes that I saw log messages of new downloads hitting my container path mapping for /InterDir/ (nzbget.conf), which was mapped to /tmp (unRAID Docker container path settings for the NZBget container). Thanks @neilt0, this thread continues to deliver over 4 years after your OP! @trurl thanks for helping me keep my sanity, confirming that what I was doing all along was correct.
  5. So far I've killed one SSD every 1.5 years, and it's the cheap ones that die. It's not the cost, like you said they are cheap, but ugh, I'd rather be spending my time on so many other projects than replacing them. RAM may be my answer.
  6. Yes, I was thinking the same thing, but so far I have failed miserably in my attempts to try it. I've tried multiple different ways of mounting /tmp or tmpfs, and that part works, best I can tell. From the bash shell within the NZBget container I'm able to successfully see the mapped mount point, even create and edit files, and see them back on the host. A+, I'm solid here. Where the trouble lies is getting the app NZBget to use that mount point. I've edited the appropriate /InterDir in the 'Paths' section of settings. I've double-checked the nzbget.conf file to ensure it matches, but no matter what, NZBget ignores it and falls back to $MainDir/intermediate. I've yet to enable debug logging for NZBget, but that's where I'll look next. I expect it must be some permission problem with the mount and tmpfs as a device type. All I know is it shouldn't be this painful; I must be missing something painfully simple.
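For anyone debugging the same thing, here's a small sketch of how you could confirm from inside the container that the path really is tmpfs-backed, by matching the longest mount-point prefix in /proc/mounts. The fs_type helper and the sample mount lines are my own illustration, not part of NZBget:

```python
# Sketch: determine which filesystem type backs a given path,
# assuming a Linux-style /proc/mounts (fields: device, mountpoint, fstype, ...).
def fs_type(path, mounts_text):
    """Return the fstype of the longest mount-point prefix covering path."""
    best, fstype = "", None
    for line in mounts_text.splitlines():
        parts = line.split()
        if len(parts) >= 3:
            mnt, typ = parts[1], parts[2]
            # Match the mount point exactly, or as a parent directory of path.
            if path == mnt or path.startswith(mnt.rstrip("/") + "/"):
                if len(mnt) > len(best):
                    best, fstype = mnt, typ
    return fstype

# On a real system you would pass open("/proc/mounts").read() instead.
sample = "tmpfs /InterDir tmpfs rw,size=8g 0 0\n/dev/sdb1 / ext4 rw 0 0\n"
print(fs_type("/InterDir", sample))  # tmpfs
```

If this reports tmpfs for /InterDir inside the container, the mount itself is fine and the problem is on the NZBget configuration side, which matches what I was seeing.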
  7. Maybe I'm asking in the wrong way, or I'm missing something, because here's what I'm observing after testing this for the last hour and getting a bit frustrated, which is why I searched and found your thread. Article cache, based on what I'm observing, keeps all of the individual articles of a RAR in RAM, like in your example here; all of those pieces are in RAM. Only once all of the pieces of a RAR have been downloaded is it moved out of the article cache (RAM) and written to disk (/InterDir) as the complete single RAR file, just like you explained here. What I'm trying to do is keep all those completed RAR files in RAM rather than written to the /InterDir disk, therefore I'm trying to make /InterDir be a ramdisk. I think this is the next logical step beyond what you were doing in 2013 (glad you're still here!) using temp (tmp). You're right that article cache solves the problem you were curious about; however, based on my tests it does not also keep the completed RARs in memory. I'm still observing those written to /InterDir, until they are all downloaded and finally unpacked and moved to /DestDir. Does this align with what you know? I expect to be called crazy for wanting this, as it means gigabytes of RARs stored in memory that could easily be lost and have to be redownloaded in the event of a server reboot.
  8. I like this old thread; it shows how far things have progressed. We now have so much RAM that we can ask the question: what if we never want to write the RAR file to SSD or disk at all, and want it to remain in RAM, such that the only thing ever written to SSD or disk is the final contents of the RAR?
  9. Nice! Thanks for reporting back! It'll help others in the future who are curious whether anyone has tried the card.
  10. Still have more of these chassis for sale! Since I'm in the holiday spirit (and I want to free up rack space!), price reduced between now and Dec 25th, 2017 to $350.00, local pickup preferred.
  11. Rails, Fan Wall, and since we live in wonderful California, no sales tax!
  12. @gmac13 @richardsim7 Answer found to how to bring back up the file transfer status window in Krusader once you've clicked away.
  13. If you know of a better place to post it, let me know. I'll post a link back to it from the Sparky-Repo thread. I found this thread by searching to see if an answer existed. If someone ever takes over the Sparky Krusader container to update it, this was one of two possible solutions to the problem; the one I posted is the easiest and requires no changes to the container. The other solution (assuming it works), which I first started going down the path of, was trying to build wmctrl inside the container. It seems wmctrl would make it possible to add a button to the Krusader GUI and menu inside the container, but it requires some additional work.
  14. @tillkrueger @jenskolson @trurl @unrateable @jonathanm @1812 @Squid since all of you were active in this thread: I found a way to get the file transfer window back. Bring up the Guacamole left panel menu (CTRL+ALT+SHIFT), set Input Method = On-Screen Keyboard, and in the On-Screen Keyboard press ALT (it'll stay on, 'pressed'), then TAB; select the window using TAB, then press ALT again (to turn it off). A tip I found, too: any time you do a copy or move, it's best to use the 'queue' button in the pop-up confirmation dialog so that multiple transfers are handled sequentially. Using the queue often mitigates much of my need to see the file transfer progress window, and the 'Queue Manager' is easy to bring back on screen via the top menu, Tools > Queue Manager.
  15. This is a 3-minute mock-up I cooked up in MS Paint while at my in-laws', away from my graphics workstation. It's just to convey the concept, and if it's of interest, I can take it further with some more concepts. Vector is easy, as you can see, and Adobe Illustrator is also easy. The logo is built of blocks of different sizes to represent the different hard drive sizes, one of unRAID's long-time core features. I got a bit lazy once I got to the 'AID' letters in terms of the blocks
  16. Wow, this is a big idea. You have me thinking of an array where each disk in the array points to an iSCSI target that is another unRAID server. OMG fun!
  17. Thanks, I did some research on this. Ok, first the problem... I attached a picture below to illustrate it. Warning is set to 99%, Critical to 100%, using an 8TB drive as an example. 1% seems to be equal to roughly 40GB in size based on how unRAID is calculating it. The GUI then receives this 'critical' state and shows red. 40GB doesn't seem like that big of a deal for a single drive, but multiply this across all my drives, potentially the unRAID max drive limit (28? 30? I forget). Let's just use 28 for the math... 28 * 40GB = 1.1TB of free space remaining, yet I'd be in a critical state. That sounds like a lot, but if I had 28 drives at 8TB in size, that means I'd have 224TB of total space. Am I really going to care about only having 223TB out of 224TB due to this loss? I think the decimal place is still a good idea. It's curious that it's not a setting in /config/disk.cfg; how does this work @limetech ?
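One possible explanation for the ~40GB figure, purely my assumption and not confirmed by unRAID's code: if the GUI rounds utilization to a whole percent, a disk reads as 100% used as soon as free space drops below 0.5% of capacity, which works out to 40GB on an 8TB drive:

```python
# Sketch: free space (in GB) at which a disk would first display 100% used,
# ASSUMING utilization is rounded to a whole percent (unconfirmed assumption).
def critical_free_gb(capacity_tb):
    capacity_gb = capacity_tb * 1000   # decimal TB/GB, as drive vendors count
    return 0.005 * capacity_gb         # below 0.5% free, 99.5%+ used rounds to 100%

per_disk = critical_free_gb(8)         # GB of free space on one 8TB disk
print(per_disk)                        # 40.0
print(28 * per_disk / 1000)            # 1.12 -> ~1.1TB across a 28-drive array
```

That would also explain why showing a decimal place helps: 99.5% and 100.0% are very different states once drives are this large.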
  18. @limetech I tested two areas where there may be action that can be taken to remedy this. I also noted one related new issue.

      Two possible actions:

      (1) /config/smart-one.cfg - Seems viable to add disks by name. It would need a change to allow a disk name to be handled differently than today. Presently it works on [array disk] and would need to be updated to allow [array disk or disk (ex: sdX)].

      (2) Global Disk Settings - I saw @bonienl post regarding Global Disk Settings and found that yes, this seems viable; however, selecting a default 'Smart Controller Type' = 'Areca' may have a defect. The GUI does not then dynamically show the fields needed to define it (since those settings are really at the device level, so this seems to be correct function). However, once I press 'Apply', the GUI refreshes and the selection switches from 'Areca' to 'Automatic'.

      Related new issue: my work-around as detailed previously for my [parity] disk works correctly, but only once the array is started. When the array is not started, the logs fill with the Sense error again for sde, which is my [parity] disk. It seems the 'Smart Controller Type' is not applied at the per-disk level for array disks until the array has been started.

      Seeing all these items makes me wonder if the real solution is to apply this at the disk level in /config/disks.cfg, but I don't know. Before I proceed any further, what are your thoughts?
  19. @limetech Honestly wasn't going to ask. But... ha... your question inspired me to research for the last 30 minutes... Possible solution found.

      PROBLEM: The workaround is limiting, as what I need is to define the 'Smart Controller Type' for any disk that is not assigned to the array. Since unassigned disks are not visible in the GUI, there is no way to assign them in the same manner that unRAID supports now. This limits the workaround to Parity or Cache disks, but not unassigned disks.

      Possible Solution: UD Plug-In. The next logical step seemed to be to see if @dlandon could add 'Smart Controller Type', as the UD plug-in has the same GUI device details settings. No luck per his reply; I understand.

      Possible Actions:

      unRAID GUI - No. I don't see an easy way in the GUI to define, on a per-disk basis, the 'Smart Controller Type' for a disk that is not in the array.

      flash/config/smart-one.cfg - Yes? This might be possible. From what I found, this is where 'Smart Controller Type' is stored. Right now I have this:

        [parity]
        smType="-d areca"
        smPort1="10"

      Disks seem to use [Disk Would it be possible to add this?

        [sdX]
        smType="-d areca"
        smPort1="10"

      In my example it would be [sdc], as that is my unassigned disk. I'm testing this now to see how it behaves. Will report back whether it already works or not.

        [sdb]
        smType="-d areca"
        smPort1="10"
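For illustration only: a sketch of how those smType/smPort1 values could map onto a smartctl command line. The smart_cmd helper is hypothetical, something I wrote for this post; only the '-d areca,N' device-type syntax is standard smartctl usage:

```python
# Sketch: combine smart-one.cfg style values into a smartctl invocation.
# smType="-d areca" plus smPort1="10" becomes "-d areca,10" on the command line.
def smart_cmd(dev, sm_type, sm_port):
    """Build a smartctl command string for a device behind a RAID controller."""
    dtype = f"{sm_type},{sm_port}" if sm_port else sm_type
    return f"smartctl -a {dtype} /dev/{dev}"

print(smart_cmd("sdc", "-d areca", "10"))  # smartctl -a -d areca,10 /dev/sdc
```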
  20. It's the middle of the day and I've reached my 'Like' quota. Can you please increase this? Not sure what problem this quota limit is trying to solve. Don't care. I'd recommend making it 5x what it is now.
  21. As the title says... What happens is that while reading the backlog of 'unread content', new content gets posted on the forums before I reach the bottom of the backlog. When pressing 'Mark site read' it marks everything as of that moment as 'read', including the new posts. Any new posts between the time I started reading 'unread content' and when I press 'Mark site read' are marked read. Confused? So am I, LOL. Anyway, there's an easy workaround, so it's not a big deal. Workaround: when finished reading 'unread content', simply repeat again, read the delta of new posts, and then hit 'Mark site read'.
  22. @gridrunner THANK YOU!!!! Yet another amazing video. This one is going to be HUGE!
  23. Thanks, glad you're thinking about it. My problem is too many NICs and trying to see them all easily at a glance.