
Neo_x

Members
  • Posts

    116
  • Joined

  • Last visited

Posts posted by Neo_x

  1. Hi Tom / Team

    10+ years user here (still struggling to find the email from when I bought my Pro license).
    Difficult choice you guys are facing, but a business has to remain afloat.

    A possible option for lifetime Pro users is to donate to the Limetech project, i.e. have a website track the upcoming expenses for the year and show how much of that total has been met.

    I understand it is a bit of a shift, but I will 100% support your product with a bit more visibility on where the money goes (and what you are short).
    Even if it is a small amount of maybe 10/20/30 USD per year (user's choice), I believe it will fill a few holes.

    keep up the good work!

    Regards

    • Like 1
  2. 11 hours ago, MeisterPilaf said:

     

    I could send you some results if you describe a little further. i9-12900T here.

     

    4k h264 -> 4k transcode? or 4k h265 -> 4k transcode?

     

    As far as I have tested for my needs, it can handle a lot! 4 streams at once from 4K h265 -> 4K h264 transcode without any problem.

     

    Send me some info.

     

    Thank you. Typically 4K h265 HDR to 4K h264 -> I don't see a reason to go lower.

    Following the thread at a glance, it seems tone mapping only recently started working.

     

    Any detail is welcome! Thank you, sir.

     

  3. Can anyone confirm how many 4K transcodes the 12900 can handle?

    Seeing mixed results online, and looking to see what combination of hardware would give a cost-effective number of streams. Power usage is not that important (<400 W under load should be fine).

     

    My alternative would be to rather go with an Nvidia GPU, but then HDR tone mapping needs to be supported.
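    For reference, the rough way I was planning to benchmark it myself - just a sketch, assuming ffmpeg is built with QSV support and sample_4k_hevc.mkv is a placeholder name for any local 4K h265 test file:

    # launch four hardware transcodes in parallel and watch the reported speed;
    # anything that stays above speed=1.0x is keeping up in real time
    for i in 1 2 3 4; do
      ffmpeg -hwaccel qsv -c:v hevc_qsv -i sample_4k_hevc.mkv \
             -c:v h264_qsv -an -f null - &
    done
    wait

    (This says nothing about HDR tone mapping - that part would still need to be verified separately in Plex.)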

    • Like 1
  4. Hi guys
    Just checking if this is normal behaviour:

    (screenshot attached)

     

    Somehow, when mover is activated, it writes to multiple drives at the same time. I don't think multiple simultaneous drive writes are good, especially for the resulting writes to parity.
    Any ideas what could cause this?
    *I understand this might not be related to the scheduler plugin at all -> let me know if this post should be moved.

  5. Running into a bit of a challenge here, where mover only seems to move a certain percentage of files.

    Expected operation -> cache share hits 95% usage, mover then moves all files to the array. Instead it only moves roughly 20%.

    On the 23rd I did a manual move (Main menu, the Move button), which took it down to ±10%.

    The 24th and 25th were automatic moves once it hit 95%, but they only moved about 25%.

    (screenshot attached)

     

    Configuration as follows:

    (configuration screenshots attached)

     

    Any ideas? Should I enable mover logging and/or test mode for further troubleshooting?
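    If I do turn mover logging on, I assume its entries end up in the syslog, so something like this should show what it actually moved (the filter string is just a guess):

    # after enabling mover logging, pull the most recent mover entries from the syslog
    grep -i mover /var/log/syslog | tail -n 50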

     

    Thx!

  6. On 3/10/2021 at 8:18 PM, privateer said:

    My setup is a combination of local + cloud storage using this setup. I'm using Unraid as an OS. Recently, I've encountered some issues with my CPU maxing out due to my Unraid use + the number of transcodes I have. Instead of upgrading the chips or adding a graphics card, I chose the less expensive solution of grabbing a dedicated plex box for transcoding using Quick Sync. The main reason for the decision was the setup was far cheaper ($80) and the overall power usage is low, so total cost is significantly lower. Also allows you to run other things on this box if you choose.

     

    Box has Ubuntu with Plex on bare metal. I mounted my local unraid drives and mounted the gdrives. I haven't maxed out my transcodes yet but looks like the box can likely support 15+ (although I would bet probably 20+). I'm only allowing transcodes on 1080p content.

     

    For people who are using a similar setup to me, I think this is a good solution. Just wanted to let everyone know this is an option!

     

    I might be interested in this -> which solution is $80?!

     

    Also currently running Plex on Unraid, but no amount of explaining can convince the users not to transcode... sigh. Upgrading the CPU will only get me so far, and there are no slots available for an Nvidia card (or rather, I believe it will limit bandwidth on the other slots if I install one).

     

  7. I unfortunately cannot give 100% answers. I do know I had similar issues (to a lesser degree, but still) with a bad-quality SSD (it was the cheapest drive I could afford), which brought the system to a halt every time large write/read actions were running on it. Dockers didn't crash, but a Plex database query, for example, timed out in many cases.

     

    Maybe see if system stability increases when bypassing the cache (I guess it is a given that it will), and test from there onwards.

    Maybe check whether your PCIe bus is being stressed (e.g. all the drives pushing through one x16 slot)? -> low chance, but it happened with my 24-drive system (see the DiskSpeed docker).
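    If you want a quick manual version of that check - just a sketch, with sdX standing in for whichever drives sit behind the same controller - read from them one at a time and then several at once; if the combined numbers collapse, the bus/HBA is the bottleneck:

    # sequential read test that bypasses the page cache; repeat per drive, then run a few in parallel
    dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct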

     

    My ideas so far. Would love to know what the cause is.


  8. Hi guys

     

    I might be 90 pages late on this.

    Can someone give pointers on how I should configure the script for the following scenario:

    I have /mnt/user/Movies and /mnt/user/Series which still contain files (and will for the foreseeable future).

    I also have /mnt/disks/google/Movies and /mnt/disks/google/Series (rclone mount), which contain the portion of my data that I uploaded over the last year.

    Currently using a variation of the upload script to copy from /mnt/user/Series, where I then selectively delete data on the local share once I am happy a full series has been copied over.

    Plex obviously has both folders added (so some series might show two copies of an episode, but that doesn't affect usage)

    Sonarr is linked to the local folder only.

     

    question:

    How do I move to a mergerfs solution?

     

    Will I need to create a user share called "local" and move both my Movies and Series into that? Or is there a way to use my current folder structure (preferred)?

     

    I have a feeling the answer is very simple, but looking through the script I just can't seem to figure it out.
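    If I understand the general idea, it boils down to a single mergerfs mount that overlays the local branch and the rclone mount - something like this rough sketch (the merged mount point is just a placeholder, and the option set is what I have seen recommended, not necessarily what the script itself uses):

    # local branch listed first so new writes land locally; the cloud branch sits behind it read-mostly
    mergerfs /mnt/user/Movies:/mnt/disks/google/Movies /mnt/merged/Movies \
      -o cache.files=partial,dropcacheonclose=true,category.create=ff

    Plex and Sonarr would then point at the merged path instead of the two separate folders.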

     

    thx team!


  9. Hi Ryzen team

    Hope somebody following this thread has advice. I am having a challenge where passing through the only ATI card to a Windows VM on a Ryzen system is working (SeaBIOS without UEFI), but then the $%$^ hits the fan when I install the ATI drivers (crash during driver install, and after that the VM won't start again).

    Has anybody encountered this, and does anyone have something I could try?


    More details here:

     

    thank you team!


  10. 17 minutes ago, luisv said:

    Just another option, I'm running the Asus Prime X370-Pro and same here, been extremely happy.  I'm using an LSI 9211-8i as I'm starting to run out of SATA ports.  More details are in my signature... any questions, feel free to ask.

    Hmm, OK, that looks more like what I'm planning. Are you doing passthrough though? Your MB/CPU does not provide onboard graphics capability.
    And another question about headless passthrough I am trying to figure out: what happens if you shut down the VM? Will the Unraid host remain on and then display the Unraid CLI?

  11. Sorry for jumping in here, but I hope you guys can help.

    I am planning to upgrade to a Ryzen build - possibly a 2700X - but I am having a challenge selecting the MB.

     

    I have a 24-slot chassis, using all of the drive slots + a cache. I currently have 2 x 8-port PCIe SAS cards, and then a combination of onboard SATA ports and older-generation PCI cards providing the remainder.

    I want to go the passthrough route (i.e. one main system that handles both Unraid and daily usage).
    Thus I will need:
    PCIe x16 for main system graphics (I'm currently running an RX 480) - my wish would be to have this passed through and Unraid running headless, as it will save a precious PCIe slot for a secondary GFX card

    2x PCIe x8 for the 8-port SAS cards

    At least 8 onboard SATA ports (or alternatively, 3 x 8-port SAS cards)

    A cache drive?

    32/64 GB DDR4-3000 RAM

     

    The following Threadripper board seems to have enough PCIe slots, but I just can't seem to locate an AM4-socket board that fits the bill:

    https://www.asrock.com/MB/AMD/X399 Taichi/index.asp

    Obviously I would prefer to select a system that is proven to be stable :)

     

    TIA

     

    NeoX

  12. 5 hours ago, johnnie.black said:

    Because disk18 still has more free space than disk19.

     

    4 hours ago, itimpi said:

    Missed that in the screenshot!   Was intent on looking for a more complicated answer ;) 

     

    You had me saying "DUHHHHH" out loud in the office (although I always base it on a percentage). Will balance/move some data around tonight to test, but I believe this is definitely the answer!
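    Note to self for the balancing tonight - a quick way to compare the absolute free space on the two disks from the console (assuming the standard per-disk mount points):

    # "Most Free" compares absolute free space, not percentage
    df -h /mnt/disk18 /mnt/disk19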

    thank you guys!

  13. 6 hours ago, itimpi said:

    What Split Level do you have for the share in question? Split Level will override the Allocation method if there is any contention as to which disk to use.

     

    Didn't touch split level for a long time (data is equally distributed across the other 18 disks). I believe it is level 2?

    (screenshot attached)

     

    Sample directory structure under a share:

     

    Share name: Movies
    Folder level 1: "A" / "B" / "C" etc.
    Folder level 2: "Movie name (year)"

     

    Currently all the drives contain the level-1 folders, with the level-2 folders being unique per drive.

     

  14. Hi Unraid team

     

    I'm having a very, very odd problem where my user share just refuses to utilize one of my drives. The setup is very stable (8 years stable) and is working fine, even with a drive I added as recently as a month ago (Disk 18 in the screenshot), which was pre-cleared and added. The drive giving issues now (Disk 19) was previously a cache drive, but was pre-cleared before adding it to the array.

    Unraid version 6.4.1 *edit - upgraded to 6.5.2, same issue persists*

     

    Some of the things I have tried:

    • Checked Global Share Settings (with the array stopped). The drive is enabled (nothing excluded).
    • Checked Share Settings for the specific shares - all drives are selected.
    • Copied data using MC directly between drives - opening the user share shows that the files are visible (which makes me believe the drive is part of the share?).

     

    Thus the conundrum: why is the "Most Free" allocation method not utilizing the drive? (Disk 19 in the screenshots below)

    (screenshots attached)
    TIA!

    Neo_X
     

    storage-diagnostics-20180604-2158.zip (version 6.5.2)

     


  15. 1 hour ago, binhex said:

     

    Nice to know it's still in use. I personally rely on MG for my movies; I have never used CP and don't really want to. The project isn't dead, it's just in a holding state until I get around to it. I have big plans for MG: a complete re-write, new UI, the works! I've learnt a lot since I originally coded it up several years ago, and am keen to make it faster and cleaner than ever.

     

    So once I have all my automation in place for the docker images I produce, I will hopefully move my focus onto whipping MG into shape. No timeline at the mo, but I would love to get back on it by the middle of this year, and might see some good results by the end of the year.

     

    One question for you: you use Usenet, I see. Can I ask which index sites MG is still working with? I'm using only torrent sites right now, so I have no idea which of the Usenet index sites still work and which don't.

    Hi Binhex

     

    Reverse question, lol - which torrent providers are you running on MG?

  16. Nice review above!

     

    Unfortunately I have had a bad experience, which I am not 100% sure is related to the plugin, but here goes.

     

    I helped a friend build a mini server over the last week or two: we picked up an HP Gen8 MicroServer on special, to which we added 3 Seagate 8TB Archive HDDs.

     

    I managed to get the preclear working via the GUI after creating an array and then deleting the config again (preclear was complaining about a missing disk.cfg).

     

     

    In any event, we then connected to the GUI every few hours to monitor progress, and everything went fine. Preclear finished in about 72 hours, which is roughly double the time my 4TB drive took - really OK with that, especially since we ran all three drives at the same time.

     

    The problem we encountered: directly after the preclear, the Main page showed some strange values for the flash drive.

     

    The flash ID was missing (with Unraid requesting us to buy a key again), the flash reported reads of something in the range of 144,000,000 (a massive number - I might be missing some zeros), and writes of about 16,000.

     

    Restarting the machine confirmed my suspicion - the flash gave up on us (in hindsight I should have made a screengrab and captured the syslog).

     

     

    Is there any chance that the plugin performs constant reads from the flash in any way? I don't think there should be a reason, but just in case.

     

    Going to return the flash today and request a replacement key from Limetech - so the server should be able to get going again soon.


  17. Hi guys

     

    I have recently upgraded the s3_sleep plugin to the latest version, as well as Unraid from 6.0.1 to 6.1 RC2.

     

    The problem is, after that upgrade, I have issues all over - mostly with drives not spinning down, or, if they have spun down, s3_sleep still picks them up as running (as per the dashboard status).

     

    Is there any way to try and troubleshoot? I have rolled back to 6.0.1.

     

    Some captures below:

     

    root@Storage:~# ps -elf | grep disk

    1 S root      5001    1  0  80  0 - 38196 futex_ 22:39 ?        00:00:00 /usr/local/sbin/shfs /mnt/user0 -disks 32766 -o noatime,big_writes,allow_other

    1 S root      5012    1  0  80  0 - 55127 futex_ 22:39 ?        00:00:00 /usr/local/sbin/shfs /mnt/user -disks 32767 2048000000 -o noatime,big_writes,allow_other -o remember=0

    0 S root    23475 22208  0  80  0 -  1275 pipe_w 23:20 pts/0    00:00:00 grep disk

    root@Storage:~#

     

     

    root@Storage:~# ps -elf | grep mnt

    1 S root      5001    1  0  80  0 - 38196 futex_ 22:39 ?        00:00:00 /usr/local/sbin/shfs /mnt/user0 -disks 32766 -o noatime,big_writes,allow_other

    1 S root      5012    1  0  80  0 - 55127 futex_ 22:39 ?        00:00:00 /usr/local/sbin/shfs /mnt/user -disks 32767 2048000000 -o noatime,big_writes,allow_other -o remember=0

    0 S root    26702 22208  0  80  0 -  1275 pipe_w 23:25 pts/0    00:00:00 grep mnt

     

    Diagnostics with syslog are attached.
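    One more cross-check I can do is query the drives' actual power state directly and compare it to what the dashboard claims - a quick sketch, assuming hdparm is happy with these drives:

    # report each drive's power state (active/idle vs standby) without spinning it up
    for d in /dev/sd[a-z]; do
      echo -n "$d: "
      hdparm -C "$d" | grep state
    done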

     

     

    Please advise what other troubleshooting I can maybe do.

     

    thx

     

    Neo_x


    storage-diagnostics-20150811-2326.zip
