war4peace


Posts posted by war4peace

  1. Hello, I would very much appreciate it if you could help a newbie with queue processing in Avidemux.
    I have it installed on my Unraid server, in a Docker container, and I have quite a few videos (camera surveillance footage) from which I need to cut parts and save them separately for a montage. Just family stuff, with family members coming and going when they visit us.
    I want to go through a few hundred of those footage files, perform the cut for each, and add it to the processing queue. I am able to save a cut as a job: Avidemux asks me for a job name and an output file, and after filling in that form, I assume the job is added to a list somewhere.
    Opening the container's command line from Unraid, I found "/usr/bin/avidemux3_jobs_qt5"; when run, the application shows an "Avidemux Jobs" window and the CLI prints the following output:

     

    /usr/bin # ./avidemux3_jobs_qt5 
    Directory /root/.avidemux6/ exists.Good.
    Using /root/.avidemux6/ as base directory for prefs, jobs, etc.
    *************************
      Avidemux Jobs v2.8.1
    *************************
     http://www.avidemux.org
     Code      : Mean, JSC, Gruntster 
     GFX       : Nestor Di , [email protected]
     Design    : Jakub Misak
     FreeBSD   : Anish Mistry, [email protected]
     Audio     : Mihail Zenkov
     MacOsX    : Kuisathaverat
     Win32     : Gruntster
    
    Compiler: GCC Alpine Clang 13.0.1
    Build Target: Linux (x86-64)
     [jobInit] 21:14:57-696  Initializing database (/root/.avidemux6/jobs.sql)
     [ADM_jobCheckVersion] 21:14:57-697  Db version 3, our version 3
     [ADM_jobCheckVersion] 21:14:57-697  Same version, continuing..
     [jobInit] 21:14:57-697  Successfully connected to jobs database..
    QStandardPaths: runtime directory '/tmp/run/user/app' is not owned by UID 0, but a directory permissions 0700 owned by UID 99 GID 100
     [loadTranslator] 21:14:57-708  Using system language
     [loadTranslator] 21:14:57-708  Initializing language en_US
     [loadTranslator] 21:14:57-708  Translation folder is </usr/share/avidemux6/qt5/i18n/>
     [loadTranslation] 21:14:57-708  [Locale] Loading language file /usr/share/avidemux6/qt5/i18n/qtbase_en_US  [loadTranslation] 21:14:57-708  FAILED
     [loadTranslation] 21:14:57-708  [Locale] Loading language file /usr/share/avidemux6/qt5/i18n/avidemux_en_US  [loadTranslation] 21:14:57-708  succeeded
     [loadTranslator] 21:14:57-708  Updating translations...
     [loadTranslator] 21:14:57-708  [Locale] Test: &Edit -> &Edit
    
     [createBindAndAccept] 21:14:57-741  Binding on 127.0.0.1:0
     [createBindAndAccept] 21:14:57-741  Socket bound to port 45445
     [jobWindow] 21:14:57-741  Socket bound to 45445
     [refreshList] 21:14:57-741  Found 0 jobs

     

    The job list window is empty, though. 

    ...And this is where I am not sure how to continue anymore. Any help would be greatly appreciated!
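
    For context, the end goal is a batch loop like the sketch below. This is only a guess pieced together from older Avidemux CLI documentation: I haven't verified that avidemux3_cli ships in this container or that 2.8 still accepts these exact flags, and the frame numbers are placeholders.

    #!/bin/bash
    # Sketch only (unverified flags): cut the same segment from every file
    # and save each cut separately. --begin/--end are frame numbers here
    # and would differ per file in practice.
    for f in /media/footage/*.mp4; do
      avidemux3_cli --load "$f" \
                    --begin 500 --end 2000 \
                    --output-format MP4v2 \
                    --save "/media/cuts/$(basename "$f")"
    done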
    Thank you!

  2. Hello everyone,

     

    I have started planning for a large upgrade to my existing Unraid server.

    Current specs:

    Unraid Pro on an X570 motherboard, Ryzen 5 2400G, 2x NVMe SSDs in RAID0 as fast cache, 2x SATA SSDs in RAID1 as cache for VMs and Docker, and a hodgepodge of variously sized HDDs (6 of them) ranging from 4 TB to 18 TB. One parity drive.

    Add-ons: a 10G NIC and an LSI HBA which supports 8x HDDs.

    All in a tower case.

     

    Starting with my budget: up to €5000, but I could go maybe 10% higher if there's a good reason for it.

    Reason to upgrade: I am currently rebuilding my whole network, putting everything in a tall 19" rack, and the current server's hardware doesn't allow for much expansion.

     

    The drive setup will certainly expand as time goes by; I only want to buy 18+ TB drives to maximize data density. The longer-term goal is to have at least 1 PB of space within the next 3 years, which at 18 TB per drive works out to roughly 56 data drives, so controller and backplane capacity matter. Therefore, I want to have good server infrastructure (motherboard, CPU(s), RAM, HBA cards) in place before I start expanding storage.

     

    I saw quite a few options for EPYC CPUs on eBay, and I plan to buy from there. After looking long and hard at various options, I admit I'm a bit lost (too many things to choose from, I'm afraid), so I'd much appreciate some advice.

     

    Starting with the HBA card(s?): I saw this one as a good option, but I'm open to others if they are better: LSI SAS 9300-16I 12GB/S HBA BUS ADAPTER CARD IT Mode 4*SFF-8643 SATA Cable

    Motherboard/CPU/RAM combo: This is where it gets complicated. I need 10G LAN for sure, but first I need to figure out which of these two options is preferable: single-CPU or dual-CPU. I'd rather splurge on a future-proof build than beat myself up two years from now for not going the right route, so here are two options:

    1. Single-CPU board: Supermicro MBD-H12SSL-CT-O Socket SP3/ Single AMD EPYC 7002/ DDR4/ ATX Server MB.

    2. Dual-CPU board: Supermicro H12DSi-NT6

     

    CPUs: EPYC 7742 (for the single-CPU board) or EPYC 7642 (for the dual-CPU board)

     

    RAM: since I don't need enormous RAM speed for VMs or other things I plan to do (ZFS comes to mind), 2133 MHz DDR4 ECC would suffice; quantity over speed is what I am looking for. Example: Samsung 64GB (1x64GB) DDR4 Registered ECC PC4-19200 2400MHz (16 sticks for the dual-CPU board, i.e. 1 TB; 8 for the single-CPU board, i.e. 512 GB).

     

    Now, the big question is: would there be a net advantage to dual-CPU over single-CPU? I'd like to know your opinions on this.

     

    Moving on: for fast cache (temporary files), I'd still use 2x NVMe drives, but I am not sure RAID0 would bring any advantage; I might want to go for RAID1 there as well. 2x 2 TB drives is my goal. As for the Docker cache pool, SATA SSDs should suffice. One more thing I am considering, since there will be plenty of PCI Express lanes available, is to later buy a PCIe card which holds 4x NVMe drives and use those exclusively for VMs.

     

    Case: this is where I am truly lost. Having everything in one case is more compact, but... I'm not sure it's a good idea. I saw this case (IPC 4F28 MINING-RACK), which is not expensive and holds a ton of HDDs, but would it fit the whole build? If not, I am looking for a combo (one dedicated drive case and one system case, both rackable); I'm open to suggestions because I frankly have nothing particular in mind.

    Add-ons: I have a pair of factory-watercooled GTX 1080 Ti GPUs which I plan to put in the build, for real-time transcoding of my surveillance camera streams (6x 4K + 2x 1440p), as well as for Tdarr whenever applicable. I will repurpose the existing 10G NIC elsewhere, so it will not be used here.

     

    Power delivery and UPS: I have a brand new CyberPower OLS2000ERT2U which can, if needed, be beefed up with a BPSE72V45ART2U battery pack. However, I am not sure whether I'll need server-grade PSUs or not. Advice is greatly appreciated!

     

    Cooling and noise: my current plan is to watercool both the CPU(s) and the GPUs using a MoRa 420 with a dual pump, and for all other hardware (HDDs and whatnot) I already own several Noctua Industrial 3000 RPM fans (120mm). Noise is of no importance; everything sits in a technical area in my garage, and nobody lives there except the odd spider that makes its way in. I am pretty sure they will not complain.

     

    Well... it's quite the topic, now that I look back at it. Hopefully there's enough information here for some informed advice from the experts.

     

    Thank you all in advance for your replies; I'm looking forward to reading some awesome responses!

     

  3. Hello,
    After reading through most of this topic, I wasn't able to find a way to add/install one of the utilities that would generate an image from an STL file.

    Now, before I come across as a needy person: I have done my research and attempted to solve the issue myself, but alas, I am not that smart.

    One command line tool is this:
    https://github.com/unlimitedbacon/stl-thumb
    And the other is this:
    https://github.com/aslze/minirender

    Either would suit my needs.
    The reason I ask is that I have a large collection of STL files for which I would like to generate thumbnail images. I can write scripts around these command-line tools, but I haven't found a way to compile and run them under Unraid. I should also mention that I do have a way to generate thumbnails using a script and a different tool on my remote Windows machine, but that means reading each STL over wireless to generate its thumbnail. Some of those files are BIG (up to 1 GB each), and it takes an enormous amount of time. Generating the images locally would be preferable; see the sketch below for the kind of loop I have in mind.
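
    For illustration, assuming stl-thumb were installed and using the size flag from its README (untested under Unraid), the batch loop would look something like this:

    #!/bin/bash
    # Sketch: render a 512x512 PNG next to each STL, skipping files that
    # already have a thumbnail. The library path is just an example.
    find /mnt/user/STL -type f -iname '*.stl' | while read -r f; do
      out="${f%.*}.png"
      [ -e "$out" ] || stl-thumb -s 512 "$f" "$out"
    done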

    So... if it's not that big of a hassle, could you please add either of those tools to NerdTools?

    Thank you in advance!

  4. I am trying to use the script to zero out Disk #6 from my array.
    I followed all the instructions and emptied the drive using Unbalance; the drive is now empty.
    I created a folder called 'clear-me', exactly as specified.

     

    Running the script yields the following:

    root@Tower:/boot/config/cleardrive# sh clear_array_drive 
    *** Clear an unRAID array data drive ***  v1.4
    
    Checked 6 drives, did not find an empty drive ready and marked for clearing!
    
    To use this script, the drive must be completely empty first, no files
    or folders left on it.  Then a single folder should be created on it
    with the name 'clear-me', exactly 8 characters, 7 lowercase and 1 hyphen.
    This script is only for clearing unRAID data drives, in preparation for
    removing them from the array.  It does not add a Preclear signature.

     

    Could it be the way I created the folder?

    I used these commands in the Terminal:

    cd /mnt/disk6/
    
    mkdir clear-me


    The disk is empty and freshly formatted, with just that folder on it.

    root@Tower:/boot/config/cleardrive# du /mnt/disk6
    0       /mnt/disk6/clear-me
    16      /mnt/disk6
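
    One thing I haven't ruled out is hidden files or a stray character in the folder name; these standard commands should expose either:

    root@Tower:~# ls -lA /mnt/disk6/
    root@Tower:~# find /mnt/disk6/ -mindepth 1 ! -name 'clear-me'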

     

    Any idea why the script dislikes the disk/folder?

  5. Hello everyone,
    This issue puzzles me, and I can't figure out why it's happening.
    A few days ago, I installed a 10G network card in my Unraid server. The card was detected and worked with no problem, but some (not all!) Docker apps stopped working, with no obvious message as to why.

    Some examples:
    Shinobi Pro: worked flawlessly for a year or so with no errors; after changing the network card, all monitors were shown as "not started". I exported all monitor settings, installed another Shinobi Pro container from another repo, reimported the monitors, and they worked fine.

    Grafana-Unraid-Stack: worked flawlessly before I changed the network card; now it crashes on startup and all dashboards show no data. Docker log below:

     

     * Starting Grafana Server
       ...fail!
    [info] All done
    
    [error] influxdb crashed!
    [info] loki PID: 60
    [info] Skip hddtemp due to USE_HDDTEMP set to no
    [error] telegraf crashed!
    [info] promtail PID: 75
    [error] grafana crashed!

     

    Reinstalling the container didn't change anything.

     

    Many other Docker apps work perfectly after the network card addition.
    The previous NIC (motherboard-integrated) is disabled.
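
    I'm happy to post more diagnostics, for example the output of the commands below (br0 is my guess at the name of the custom Docker network on my box):

    ip -br addr                  # confirm which interface now carries the LAN IP
    docker network ls            # list the Docker networks
    docker network inspect br0   # check which parent interface br0 is bound to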

    Any idea what might have caused these Docker apps to misbehave, and how to fix the GUS issue?

    Thank you!

  6. Hello community,

    I have Unraid 6.9, working great with absolutely no issue... except for one.

    Every time the machine restarts or shuts down, all the data from /home (under root) disappears completely.

    I have created a couple of scripts with the User Scripts plugin; they use a bash script called "discord-notify" and a static ffmpeg build, both of which are saved in the /home folder.

    A few weeks ago I shut down the server to physically move it from one location to another. Upon boot, everything in the /home folder was gone, vanished without a trace. I didn't think much of it, redownloaded the ffmpeg static build, and recreated the script; all was good until two days ago, when there was a power outage. The UPS held for a while, and Unraid was configured to shut down gracefully if the UPS battery fell below 20%. Unraid shut down just as expected, but when I started it back up, all files in /home were gone again.

    Is this expected behavior? Should I place those files somewhere else? And why would the files in /home not persist across reboots/shutdowns?
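
    In case /home turns out to live in RAM, my working assumption is that I'd have to restore the files at boot from the flash drive, e.g. by appending something like this to /boot/config/go (the /boot/extra folder is a made-up example):

    # Appended to /boot/config/go, which Unraid runs at every boot.
    # /boot/extra is a hypothetical folder on the flash drive holding the tools.
    cp /boot/extra/discord-notify /boot/extra/ffmpeg /home/
    chmod +x /home/discord-notify /home/ffmpeg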

  7. Hello,

     

    I am a Localization Specialist working on Romanian localizations of games and software.
    I'm interested in localizing and maintaining a Romanian version of Unraid.

    Please let me know if/when you consider a Romanian version of the Unraid GUI, and I'll be happy to help.

    A portfolio can be provided on request.

  8. Happy to report I have it up and running: I installed a few Docker containers I need, have almost finished copying all data from the individual HDDs to the array (down to the last HDD now), and am looking at ways to further tune/optimize the setup.

    Big thanks to everyone who helped, and to the community in general, for being no less than AWESOME!

  9. My TVs are connected to 5 GHz wireless; there is enough bandwidth for direct streaming.

    On another note, I have decided not to postpone the hardware upgrade after all, so I ordered a new motherboard, CPU, PSU and RAM.

    1. Motherboard: ASRock X570 Phantom Gaming 4
    2. CPU: AMD Ryzen 5 2400G (I wanted a 3400G, but it's nowhere to be found where I live)
    3. PSU: Corsair RM850 2019 - 12 SATA connectors
    4. RAM: Corsair Vengeance LPX 32GB (2x16), DDR4, 3000MHz
    5. A couple of USB 2.0 sticks for Unraid.

    They should all arrive tomorrow.

     

    This way I will be able to use both NVMe SSDs as well as the 2x 500 GB SATA SSDs I bought. The SSDs and the new HDDs are already installed in the case. The LSI controllers arrived at customs yesterday and are awaiting processing.

    I'm pretty excited about the whole thing. It's my first time using Unraid, and I guess I'll mess up a few things, but in the end it's a learning process.

    Thank you all for your replies, I appreciate it, you've helped me clarify quite a few things!

  10. @kimifelipe
    Thank you for the advice.
    So the plan would become:

    1. Configure LSI controller.
    2. Install Unraid on USB stick, launch, run initial configuration.
    3. Preclear 2x14 TB HDDs, preclear 4 TB empty HDD, create unprotected array using all three disks. That gives me 32 TB of empty space to copy the data to.
    4. Connect the 1x 14 TB HDD (the full one) and copy its data over to the array (see the rsync sketch after this list).
    5. Preclear the 14 TB HDD, set it as a parity drive (I don't mind subsequent data transfers taking longer).
    6. Copy the 8 TB drive data, preclear 8 TB HDD, add to array.
    7. Copy the 10 TB drive's data, then figure out what to do with the drive (shuck it with the WD fix, sell it, or use it for something else, e.g. a DVR connected to a TV).
    8. Play with Unraid apps and whatnot.
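
    For the copy steps, I'm assuming something along these lines would do; the mount point is hypothetical (via Unassigned Devices), while the rsync flags are standard:

    # Step 4 sketch: the full 14 TB source drive mounted via Unassigned Devices.
    rsync -avh --progress /mnt/disks/source14tb/ /mnt/user/media/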

    @vinid223
    Cache filling: yes, it could fill up really quickly. I am not sure how Unraid handles Internet downloads, but I am taking into account that I can easily reach 8 TB a day of downloads. I could start by adding both 500 GB SATA SSDs as well as the 256 GB NVMe drive to the cache pool, see how it behaves when downloading at max bandwidth, then decide whether to expand it. If it's enough, that means two more SATA connections available for large HDDs, which is way more helpful.
    As for the GPU: there is no transcoding happening in the foreseeable future. I stream locally to TVs, one file at a time, two max on very rare occasions. But never say never; if the need arises, I could pry out one of my watercooled 1080 Ti cards and slam it into the Unraid server :)

     

  11. Thank you very much for your reply. I was hoping someone would :)
    I will structure my reply by points; it makes it easier to continue the conversation.

     

    1. Parity drives: Yes, there will be 2 parity drives. I will buy two more 14 TB drives in April, one of which will be the second parity drive. The long-term plan is to change the motherboard and CPU and run two LSI controllers (8 HDDs + 16 HDDs), with parity and everything.
    2. Copying: would the cache fill very quickly, given that the read speed from one HDD is roughly equal to the write speed to the destination HDD? At any rate, I will enable the cache pool after finishing all transfers, so thank you for this advice. I was under the impression that parity needs to be enabled first, otherwise it might not work. Also, I don't mind the transfers taking a long time, as long as they go well.
    3. Cache pool: Yes, RAID1 was the plan. The SATA SSDs are identical and were not expensive (120 bucks for both), and to be honest I am strongly considering buying two more for the cache pool. Maybe it's overkill... you tell me :)
    4. The NVMe SSD has no other purpose but to hold PLEX and the PLEX cache, as well as some containers I will likely have to add (torrents, Usenet) - this is something I will have to read up on: how-tos, tutorials, documentation...
    5. No VMs. My main PC is powerful enough for a couple of VMs, and I plan on upgrading it anyway towards the end of the year.
    6. No transcoding. PLEX is only for in-house usage over 5 GHz wireless, and the "Disable video stream transcoding" option is enabled. The GPU is there only because the current CPU has no integrated graphics; it will go away once I upgrade my NAS hardware to a CPU with an integrated GPU.

    Once again, thank you very much for your reply, I appreciate you taking the time to do so.

  12. Hi everyone,

     

    I am a brand-new member, and happy to be here. As a beginner, I understand that some of my questions might be puerile; we've all been there, so please be gentle with me.

     

    Without further ado, here's my current situation:

    I have a homemade NAS running Windows 10 Pro x64 (I know, I know...) and, despite the general consensus about such a choice, it has served me well. The main purpose of the NAS is to hold large amounts of data (mainly media files) and run PLEX. Its hardware configuration is nothing special:

    Motherboard: Gigabyte B450 mATX
    CPU: Ryzen 3 1200
    RAM: 16 GB DDR4
    A throwaway GPU (GT 210)
    Storage:

    1. 2x 256 GB NVMe SSDs, one installed on the motherboard, the other in a PCI Express adapter. The first is used for the OS and the other as a scratch disk for downloads (I have gigabit fiber and download at 100 MB/s consistently).
    2. HDDs:
    • 1x 4 TB, internal (empty)
    • 1x 8 TB, internal (full)
    • 1x 10 TB, external (USB 3.0) (full) - this is a WD drive, I don't plan on shucking it.
    • 1x 14 TB, external (USB 3.0) (full) - this is a Seagate drive, I will shuck it after emptying it.
    • 2x 14 TB, brand new, shucked (Seagate IronWolf NAS) (empty)

     

    Obviously, the current configuration has reached its limit, so I have planned an intermediate upgrade path.
    The first step (already complete) was to buy 2x LSI 9210-8i controllers. I will configure both, then replace the scratch disk (the 256 GB NVMe SSD in the adapter) with one controller. I have also bought a larger case which can hold 10x HDDs and 2x SSDs; FWIW, it's a Gamemax Titan Silent M905S. Another thing I have bought is 2x WD Blue SATA 2.5" SSDs (500 GB each), which I will use as the cache pool.

    Yes, I am considering replacing the mobo/CPU/RAM; my personal preference would be an AMD 2400G (to free up motherboard PCIe slot #1) on an X570 motherboard with as many SATA connectors as possible, but that's not going to happen right away (budget constraints), so for now I will use what I have.

    One of the 14 TB HDDs is going to be used as a parity drive; the plan is for all the other HDDs (except the external 10 TB drive) to be connected to the LSI controllers and merged into one large storage pool. Because I have a large amount of data (almost 32 TB), I am going to have to split this into multiple steps, and here's where I would need some advice, aka a sanity check:
     

    1. Create the array, with 1x 14 TB (parity), 1x 14 TB + 1x 4 TB (data), 2x 500 GB SSDs (cache pool).
    2. Copy over the data from the full 14 TB drive to the array, verifying the copy (see the checksum sketch after this list).
    3. Add the 14 TB drive to the array.
    4. Copy over the data from the 8 TB drive to the array.
    5. Add the 8 TB drive to the array.
    6. Copy over the 10 TB drive to the array.
    7. Disconnect the 10 TB drive, maybe try to shuck it and take care of the 3.3V WD proprietary pin, then add it to the array too, or keep it as a backup for important data, as an extra layer of protection. I haven't decided yet.
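
    For the verification step between copying a drive and wiping it, I'm thinking of a checksum pass like this (standard coreutils; the paths are examples):

    # Hash everything on the source drive, then verify against the copy on the array.
    cd /mnt/disks/source14tb && find . -type f -exec md5sum {} + | sort -k2 > /tmp/src.md5
    cd /mnt/user/media && md5sum -c /tmp/src.md5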

     

    On to my questions, then:

     

    1. Is the above plan sound? Am I missing something basic which would ruin it?
    2. After all the above is finished, I will be left with a 256 GB NVMe SSD as an unassigned drive. I want it to be used for PLEX (and its media cache). From the documentation it looks possible, but I might have missed something; advice would be helpful.
    3. Anything else you might think of, that would help me in my endeavor, would be awesome!

     

    Sorry for the very long thread starter; I like being thorough, even if it might look like "too much information" :)

    And thanks a lot, in advance, for your answers!