
Lev


Posts posted by Lev

  1. 8 minutes ago, jonathanm said:

    I have a CP1500PFCLCD connected via UPS type USB and UPS cable USB with a blank device string and it works fine. The slaved instances of apcupsd on my VM's and second unraid server are connected via UPS type NET and UPS cable Ether, and a device string of <masterserver>:3551

     

    As long as you get apcupsd to successfully read power status and count minutes without power on the master, it doesn't really matter how the ups communicates.

     

    Thanks! This makes me want to give it another try someday with the USB cable and see if I get results like yours. I've only had luck using NET with the optional remote-access add-on card CyberPower offers.
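For anyone setting this up later, the quoted master/slave arrangement maps onto apcupsd.conf roughly like this (a sketch only — the directive names are standard apcupsd, and `<masterserver>` is the placeholder from the quote):

```
# Master (the machine with the USB cable), /etc/apcupsd/apcupsd.conf:
UPSTYPE usb
UPSCABLE usb
DEVICE                      # left blank for USB auto-detect
NETSERVER on
NISPORT 3551

# Each slave (the VMs and the second unRAID server):
UPSTYPE net
UPSCABLE ether
DEVICE <masterserver>:3551
```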

     

  2. Great decision to add a UPS!

     

    I also think CyberPower products and their customer support are excellent!

     

    However, the apcupsd library in Linux, and thus the one included in unRAID, doesn't marry well with the UPS you're asking about. I'd advise you to stick to tried-and-proven compatibility recommendations from others.

    • Like 1
  3. With a large number of unassigned disks, the list in the pull-down menu when assigning them can be quite long. There does not appear to be any logical order to how the disks are listed. Any logical order would be an improvement.

     

    Unless there's a better sort method preferred by the developers, it seems the list should be in ascending order of the disk letter, for example: sdb, sdc, sdd...

     

    @bonienl @Squid just FYI if this is something to add into the on-going GUI improvements.
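Just to illustrate the ordering rule (this is a sketch of the idea, not unRAID code): a plain alphabetical sort would wrongly put sdaa before sdb, so the kernel's enumeration order can be recovered by sorting on name length first, then alphabetically.

```shell
# Order device names the way the kernel enumerates them: sdb..sdz before sdaa.
printf '%s\n' sdaa sdc sdb \
  | awk '{ print length($0), $0 }' \
  | sort -k1,1n -k2,2 \
  | cut -d' ' -f2
# prints: sdb, sdc, sdaa (one per line)
```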

     

    I built this big. My advice... don't trouble yourself with those items you listed that you want to reuse. Build for where you're going, not where you've been.

     

    Write speed becomes an issue when trying to fill it in a reasonable amount of time if all drives are empty. Look at some of my previous posts and you might find some ideas on how to speed it up.

     

    You've already spent a lot of money on drives; now is not the time to be cheap. Finish the job.

     

    Let's see some pics :)

     

    • Like 1
    Update: got it working. It was, as I thought, "something painfully simple"...

     

    It works as you'd expect. What caught me up was having an existing queue in NZBget. I now know that each item in the queue is set to the paths configured in NZBget at the time it is added to the queue. So with my existing queue, it wasn't until it caught up to the point where I made the path changes that I saw log messages of new downloads hitting my container path mapping for /InterDir (nzbget.conf), which was mapped to /tmp (unRAID Docker container path settings for the NZBget container).
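To spell out the two halves of the mapping described above (the paths are the ones from this thread; the Docker settings lines are paraphrased, not exact unRAID field names):

```
# unRAID Docker container path mapping for the NZBget container:
#   Container path: /InterDir  ->  Host path: /tmp

# nzbget.conf inside the container (Paths section):
InterDir=/InterDir
```

Remember that items already in the queue keep the paths they were added with; only downloads queued after the change pick up the new InterDir.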

     

    Thanks @neilt0, this thread continues to deliver over 4 years after your OP!

     

    @trurl thanks for helping me keep my sanity and confirming that what I was doing all along was correct.

     

     

     

  6. 4 hours ago, neilt0 said:

    I typically download 50GB + files and don't have that much RAM, so write the RARs to an SSD that's only used for this purpose. It's a 120GB drive I bought for cheap, so I don't care if it dies. Then I unrar to the array.

    My article cache is 2GB to hold 2x 1GB RARs before writing to the SSD.

     

    So far I've killed one SSD every 1.5 years, and the cheap ones die.

     

    It's not the cost, like you said they're cheap, but ugh, I'd rather spend my time on so many other projects than replacing them. RAM may be my answer. :D

  7. 4 hours ago, trurl said:

    You should be able to map a volume for the docker to /tmp and point NZBget to it.

     

    Yes, I was thinking the same thing, but so far have failed miserably in my attempts to try it.

     

    I've tried multiple different ways of mounting /tmp or tmpfs, and that part works, best I can tell. From the bash shell within the NZBget container I'm able to see the mapped mount point, even create and edit files, and see them back on the host. A+, I'm solid here.

     

    Where the trouble lies is getting the NZBget app to use that mount point. I've edited the appropriate InterDir in the 'Paths' section of settings. I've double-checked the nzbget.conf file to ensure it matches, but no matter what, NZBget ignores it and falls back to $MainDir/intermediate.

     

    I've yet to enable debug logging for NZBget, but that's where I'll look next. I expect it must be some permission problem with the mount and tmpfs as a device type. All I know is it shouldn't be this painful; I must be missing something painfully simple. :S
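The sanity checks described above boil down to something like this sketch, run against a throwaway directory ($dir stands in for the mapped /InterDir mount point; on the real server the mount itself would be a tmpfs, e.g. `mount -t tmpfs tmpfs /some/dir`):

```shell
# Can the mount point be written to and read back?
dir=$(mktemp -d)                  # stand-in for the mapped /InterDir
echo "probe" > "$dir/probe.txt"   # create a file on the mount
cat "$dir/probe.txt"              # read it back; prints: probe
rm -r "$dir"                      # clean up
```

If this passes inside the container but NZBget still falls back to $MainDir/intermediate, the problem is in NZBget's config, not the mount.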

  8. 8 minutes ago, neilt0 said:

    That's what article cache does.

     

    Maybe I'm asking in the wrong way, or I'm missing something, because here's what I'm observing, having tested this for the last hour and getting a bit frustrated before I searched and found your thread :)

     

    Based on what I'm observing, the article cache keeps all of these individual pieces of a RAR in RAM; like in your example here, all of these guys are in RAM.

     

    On 10/28/2013 at 2:43 PM, neilt0 said:

    temp

    UbuntuAnimalName.part001.rar.001 1.3MB

    UbuntuAnimalName.part001.rar.002 1.3MB

    UbuntuAnimalName.part001.rar.003 1.3MB

    UbuntuAnimalName.part001.rar.004 1.3MB

    ...

     

     

    Only once all of the pieces of a RAR have been downloaded is it moved out of the article cache (RAM) and written to disk (/InterDir) as the complete single RAR file, just like you explained here:

     

    On 10/28/2013 at 2:43 PM, neilt0 said:

    inter

    UbuntuAnimalName.part001.rar 100MB

    UbuntuAnimalName.part002.rar 100MB

    UbuntuAnimalName.part003.rar 100MB

    ...

     

    What I'm trying to do is keep all those completed RAR files in RAM rather than have them written to the /InterDir disk, so I'm trying to make InterDir a ramdisk. I think this is the next logical step beyond what you were doing in 2013 (glad you're still here!) using temp (tmp). You're right that the article cache solves the problem you were curious about; however, based on my tests, it does not also keep the completed RARs in memory. I'm still observing those written to /InterDir until they are all downloaded and finally unpacked and moved to /DestDir. Does this align with what you know?

     

    I expect to be called crazy for wanting this, as it means gigabytes of RARs stored in memory that could easily be lost and have to be redownloaded in the event of a server reboot. :)

     

     

  9. On 6/29/2015 at 5:07 PM, neilt0 said:

    You can, but there's no point any more - using RAM to build the RAR before writing to disk is now built in as articlecache.

     

    I like this old thread; it shows how far things have progressed, such that we have so much RAM to ask the question... what if we never want to write the RAR file to SSD or disk at all? We want it to remain in RAM, such that the only thing ever written to SSD or disk is the final contents of the RAR.
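A hypothetical sketch of what that would look like (mount point, size, and paths are illustrative; the mount needs root, and everything in it vanishes on reboot):

```
# On the host: create a RAM-backed filesystem.
#   mkdir -p /mnt/ram
#   mount -t tmpfs -o size=8g tmpfs /mnt/ram
#
# Map /mnt/ram into the container as /InterDir, then in nzbget.conf:
#   InterDir=/InterDir
#
# Only the unpacked result (DestDir) ever touches SSD/disk.
```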

     

     

  10. 2 hours ago, michael123 said:

     

    Yes! Adaptec 1000-16i works like a charm in 6.3.5

    No firmware upgrade needed

    No firmware reflash needed

    Just connect the drives and stick it into your PCI slot 

     

    Nice! Thanks for reporting back! It'll help others in the future who are curious whether anyone has tried the card.

    • Like 1
  11. On 4/21/2017 at 3:30 PM, gmac13 said:

    Hi. I am very new to Unraid so please be gentle.. If it's not possible to bring up the status display in Krusader (once closed), is there any way in the terminal or somewhere to see what is being transferred? I.e., if I open Krusader on one computer and start a transfer, then go to another computer, can I know what's happening in the background..

    Cheers..

     

    @gmac13 @richardsim7

     

    I found the answer to bringing back the file transfer status window in Krusader once you've clicked away.

     

     

    • Like 1
  12. 2 hours ago, Squid said:

    Wow.   Never even knew about the Guac menu.   Now, what do we do with this post as this question comes up a fair amount of time?

     

    If you know of a better place to post it, let me know. I'll post a link back to it from the Sparky-Repo thread. I found this thread by searching to see if an answer existed.

     

    If someone ever takes over the Sparky Krusader container to update it, this was one of two possible solutions to the problem; the one I posted is the easiest and requires no changes to the container. The other solution (assuming it works), which I first started going down the path of, was to build wmctrl inside the container. It seems wmctrl would make it possible to add a button to the Krusader GUI and menu inside the container, but it requires some additional work.

    • Upvote 1
  13. @tillkrueger @jenskolson @trurl @unrateable @jonathanm @1812 @Squid since all of you were active in this thread.

     

    I found a way to get the file transfer back.

    1. Bring up the Guacamole left panel menu (CTRL ALT SHIFT)
    2. Input Method = On Screen Keyboard
    3. In the On-Screen Keyboard, press ALT (it'll stay on, 'pressed'), then TAB; select the window using TAB, then press ALT again (to turn it off)

    A tip I found, too: anytime you do a copy or move, it's best to use the 'queue' button in the pop-up confirmation dialog so that multiple transfers are handled sequentially. The queue is easy to get to, and I found that using it often mitigates much of my need to see the file transfer progress window.

     

    The 'Queue Manager' is easy to bring back on screen using the top menu: Tools > Queue Manager

     

    • Like 9
    • Upvote 15
  14. This is probably the 4th time this has happened after upgrading to the latest release. Maybe it's happened more, and I just wasn't paying attention.

     

    After applying a new unRAID rc, upon the first reboot it fails to detect the NIC. Only br0 and loopback are present when I log in at the local terminal and run ifconfig. Doing a power-down and cold boot fixes it, and every reboot from then on is fine with no issues. It seems to only be the first reboot after the upgrade.

     

    Sadly I didn't grab diagnostics. I will next time. Otherwise everything else works great.

     

    This is a 3-minute mock-up I cooked up in MS Paint while at my in-laws', away from my graphics workstation. It's just to convey the concept; if it's of interest, I can take it further with some more concepts. Vector is easy, as you can see, and Adobe Illustrator makes it easy too.

     

    It's the logo built of blocks of different sizes to represent the different hard drive sizes, one of unRAID's long-time core features:

     

     

     

     

    unRAID_Concept_Logo_v1.PNG

     

    I got a bit lazy once I got to the 'AID' letters in terms of the blocks :P

     

    • Like 1