Neo_x

Members
  • Posts: 116
  • Joined
  • Last visited
Everything posted by Neo_x

  1. Hi Tom / team, 10-years-plus user here (struggling to find the email where I bought my Pro license). Difficult choice you guys are facing, but a business has to remain afloat. A possible option, for lifetime Pro users, is to donate to the Limetech project, i.e. have a website track the upcoming expenses for the year and show how much of that total has been met. I understand it is a bit of a shift, but I will 100% support your product; with a bit more visibility on where the money goes (and what you are short), even a small amount of maybe 10/20/30 USD per year (user's choice) would, I believe, fill a few holes. Keep up the good work! Regards
  2. Thank you. Typically 4K H.265 HDR to 4K H.264 -> I don't see a reason to go lower. Following the thread, at a glance it seems tone mapping only recently started working. Any detail is welcome! Thank you, sir.
  3. Can anyone confirm how many 4K transcodes the 12900 can handle? I'm seeing mixed results online, and I'm trying to work out what combination of hardware would give a cost-effective number of streams. Power usage is not that important (<400W under load should be fine). My alternative would be to go with an Nvidia GPU instead, but then HDR tone mapping needs to be supported.
  4. Hi guys, just checking if this is normal behaviour: somehow, when the mover is activated, it writes to multiple drives at the same time. I don't think simultaneous writes to multiple drives are good, especially for the resulting writes to parity. Any ideas what could cause this? *I understand this might not be related to the scheduler plugin at all -> let me know if it should be moved.
  5. Running into a bit of a challenge here, where the mover only seems to move a certain percentage of files. Expected operation -> the cache share hits 95% usage, and the mover then moves all files to the array. Instead it only moves roughly 20%. On the 23rd I did a manual move (Main menu, the Move button), which took it down to about 10%. The 24th and 25th were the automatic moves once it hit 95%, but they only moved about 25%. Configuration as follows: Any ideas? Should I enable mover logging and/or test mode for further troubleshooting? Thx!
  6. Now this will be interesting. Thank you DZMM -> I must admit, trusting your script to perform its magic is really something. I owe you a beer or three by now!
  7. Same issue in 2021 -> Kaspersky Security Cloud (free); Web Anti-Virus was causing the log file pop-up not to work (it showed a spinning icon, but the log file won't display). HTTPS scanning was disabled (and I don't have HTTPS running on Unraid) without effect. Had to disable Protection -> Web Anti-Virus.
  8. I might be interested in this -> which solution is $80?!?! Also currently running Plex on Unraid, but no amount of explaining can convince the users not to use transcoding... sigh. Upgrading the CPU will only get me so far, and there are no slots available for an Nvidia card (or rather, I believe installing one will limit bandwidth on the other slots).
  9. I unfortunately cannot give 100% answers. I do know I had similar issues (to a lesser degree, but still) with a bad-quality SSD (it was the cheapest drive I could afford, but it brought the system to a halt every time large write/read actions were running on it; dockers didn't crash, but a Plex database query, for example, timed out in many cases). Maybe see if system stability increases when bypassing the cache (I guess it is a given that it will), and test from there onwards. Maybe also check that your PCIe bus is not being stressed (e.g. all the drives pushing through one x16 slot)? -> Low chance, but it happened with my 24-drive system (see the DiskSpeed docker). My ideas so far. Would love to know what the cause is.
  10. AM I GOING CRAZY? I saw a post yesterday saying that mergerfs did not install after a reboot (it has me quite scared, lol, since we have regular power problems). I assume it resolved itself?
  11. Hi guys, I might be 90 pages late on this. Can someone give pointers on how I should configure the script in the following scenario: I have /mnt/user/Movies and /mnt/user/Series, which still contain files (and will for the foreseeable future). I also have /mnt/disks/google/Movies and /mnt/disks/google/Series (rclone mount), which contain the portion of my data that I uploaded over the last year. I'm currently using a variation of the upload script to copy from /mnt/user/Series, where I then selectively delete data on the local share once I am happy a full series has been copied over. Plex obviously has both folders added (so some series might show two copies of an episode, but that doesn't affect usage). Sonarr is linked to the local folder only. Question: how do I move to a mergerfs solution? Will I need to create a user share called "local" and move both my Movies and Series into that, or is there a way to use my current folder structure (preferred - see the sketch below)? I have a feeling the answer is very simple, but looking through the script I just can't seem to work it out. Thx team!
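      For readers in the same spot, a minimal sketch of a mergerfs union that keeps the existing folder structure (the mount point name and option set here are assumptions for illustration, not the script's actual defaults):
          # Union the existing local share with the rclone mount, per top-level
          # folder; no need to first consolidate into a single "local" share.
          mkdir -p /mnt/user/mount_mergerfs/Movies
          mergerfs /mnt/user/Movies:/mnt/disks/google/Movies \
              /mnt/user/mount_mergerfs/Movies \
              -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff
          # category.create=ff writes new files to the first branch (local),
          # so Plex/Sonarr can simply be pointed at the merged path. Repeat
          # for Series with the matching paths.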
  12. I am using CA Appdata Backup. It does seem to be focused on appdata, but it includes backup options for the flash drive as well as VMs. Works very well.
  13. Can confirm this feature works. However, is there possibly a method/setting to revert the behaviour to work as before? I have situations where I want the mover to complete quicker rather than be delayed by a single active stream (one that uses KBs of data, versus the roughly 50MB/s of transfer speed that then becomes unavailable to the mover).
  14. Confirmed the issue is resolved; thus a configuration issue only. @johnnie.black Thank you, sir, for picking it up! Might not be a bad idea to update the GUI to show entered numbers in more readable units (I was caught off guard that it multiplies the entered value by 1024... quick sanity check below).
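      As a quick sanity check (assuming the field is interpreted in KiB, as that 1024 multiplier suggests - an assumption, not confirmed behaviour), a 20GB floor would be entered as:
          # 20 GiB expressed in KiB
          echo $((20 * 1024 * 1024))   # -> 20971520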
  15. Very interesting (although why only allocate 130GB when 147GB is used according to the GUI and df -H?). DOH, I didn't read the guide properly. Changed it to 20GB now; will test normal operation and report back. Thank you!
  16. Hi guys, since I am running an RC, I believe it is better to ask for help here. I have a case where my dockers started crashing/misbehaving. I also noted that some data was written to the array even though the share is set to use the cache as "preferred". Giving the system a reboot last night, I thought that would resolve the issue. No luck. This morning I could focus a bit more on the issue, and I found that the syslog reports "shfs: cache disk full". This is odd, since the GUI shows that there is more than enough space (I have the mover scheduled to run on a daily basis to clear out all data that does not need to be on the cache). I have, however, had a few high-usage days where I think the cache was pushed beyond its capacity, which might be related (the cache is set for a minimum free space of 20GB, which I believe was sufficient for the docker appdata).
      The GUI shows the following - the pool is configured as RAID0, but the details given via the cache settings in the GUI do not seem correct (Data should be 480GB):
          Data, RAID0: total=130.00GiB, used=128.97GiB
          System, RAID1: total=32.00MiB, used=16.00KiB
          Metadata, RAID1: total=5.00GiB, used=3.63GiB
          GlobalReserve, single: total=282.20MiB, used=0.00B
      What I have tried: rebalance (a few times, with dockers stopped); trim, then rebalance again. So far nothing seems to reset the total capacity to the correct value. Does anybody have an idea as to what could be causing this? I am busy attempting to move appdata to the array, but Plex metadata (2 million plus files) is slowing the process down. Diagnostics attached.
      df -H output:
          root@Storage:/boot# df -H
          Filesystem      Size  Used Avail Use% Mounted on
          rootfs           17G  917M   16G   6% /
          tmpfs            34M  934k   33M   3% /run
          devtmpfs         17G     0   17G   0% /dev
          tmpfs            17G     0   17G   0% /dev/shm
          cgroup_root     8.4M     0  8.4M   0% /sys/fs/cgroup
          tmpfs           135M  3.4M  131M   3% /var/log
          /dev/sda1        16G  621M   15G   4% /boot
          /dev/loop0      9.4M  9.4M     0 100% /lib/modules
          /dev/loop1      5.6M  5.6M     0 100% /lib/firmware
          /dev/md1        4.0T  4.0T   86G  98% /mnt/disk1
          /dev/md2        4.0T  3.9T  155G  97% /mnt/disk2
          /dev/md3        4.0T  4.0T   28G 100% /mnt/disk3
          /dev/md4        4.0T  3.9T  113G  98% /mnt/disk4
          /dev/md5        4.0T  4.0T   84G  98% /mnt/disk5
          /dev/md6        3.0T  3.0T   76G  98% /mnt/disk6
          /dev/md7        3.0T  3.0T   74G  98% /mnt/disk7
          /dev/md8        3.0T  3.0T   90G  98% /mnt/disk8
          /dev/md9        3.0T  3.0T   79G  98% /mnt/disk9
          /dev/md10       3.0T  3.0T   45G  99% /mnt/disk10
          /dev/md11       4.0T  4.0T   78G  99% /mnt/disk11
          /dev/md12       3.0T  2.9T  129G  96% /mnt/disk12
          /dev/md13       3.0T  2.8T  204G  94% /mnt/disk13
          /dev/md15       4.0T  3.9T  138G  97% /mnt/disk15
          /dev/md16       4.0T  3.8T  218G  95% /mnt/disk16
          /dev/md17       4.0T  3.8T  222G  95% /mnt/disk17
          /dev/md18       3.0T  2.9T  125G  96% /mnt/disk18
          /dev/md19       2.0T  1.9T  184G  91% /mnt/disk19
          /dev/md20       8.0T  7.6T  451G  95% /mnt/disk20
          /dev/md21       8.0T  7.9T  190G  98% /mnt/disk21
          /dev/md22       4.0T  3.8T  207G  95% /mnt/disk22
          /dev/sdt1       481G  147G  331G  31% /mnt/cache
          shfs             82T   80T  3.0T  97% /mnt/user0
          shfs             83T   80T  3.3T  97% /mnt/user
          google:         1.2P  6.0T  1.2P   1% /mnt/disks/google
          root@Storage:/boot#
      Other commands:
          root@Storage:/boot# btrfs fi df /mnt/cache
          Data, RAID0: total=130.00GiB, used=128.97GiB
          System, RAID1: total=32.00MiB, used=16.00KiB
          Metadata, RAID1: total=5.00GiB, used=3.63GiB
          GlobalReserve, single: total=282.20MiB, used=0.00B
          root@Storage:/boot# btrfs fi show /mnt/cache
          Label: none  uuid: a38f3698-5c6d-43b0-aa5d-1aaee1e81822
                  Total devices 2 FS bytes used 132.61GiB
                  devid    1 size 223.57GiB used 70.03GiB path /dev/sdt1
                  devid    2 size 223.57GiB used 70.03GiB path /dev/sdo1
          root@Storage:/boot#
      Thank you guys, much appreciated! storage-diagnostics-20191119-1229.zip
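      For anyone chasing the same symptom, a hedged troubleshooting sketch using standard btrfs tooling (not a confirmed fix for this particular pool): btrfs filesystem usage breaks down allocated vs. unallocated space per device, and a usage-filtered balance compacts partially-filled data chunks so new ones can be allocated.
          # Show allocated vs. unallocated space per device on the pool
          btrfs filesystem usage /mnt/cache
          # Rewrite data chunks that are at most 75% full, returning the
          # reclaimed space to the unallocated pool for new chunk allocation
          btrfs balance start -dusage=75 /mnt/cache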
  17. Thank you, sir! Time to watch:
      Library = Movies, Days = 748
      Library = TV Shows, Days = 3203
      Library = other videos (view by folder), Days = 24
      A cool 10.9 years. I might need to retire sooner than I thought.
  18. How did you get this measure? Checking on Tautulli now, it only gives me "Played" statistics. Sitting on 10,509 movies and 3,042 TV shows (131,169 episodes).
  19. /bump. Anybody have ideas I could try? I attempted to switch to a pre-installed Windows 10 with drivers already loaded, where some progress was made, but it still seemed unstable (no crashes, but I was unable to get any software/games started that required 3D). Will try to install the drivers again, or, if all else fails, see if I can get a secondary card installed (the challenge being that I would need to sacrifice an 8-port SAS card, which would reduce my array capacity by 8 drives).
  20. Hi Ryzen team, hope somebody following this thread has advice. I am having a challenge where passing through the only ATI card to a Windows VM on a Ryzen system works (SeaBIOS without UEFI), but then the $%$^ hits the fan when I install the ATI drivers (crash during the driver install, and after that the VM won't start again). Anybody encountered this and maybe have something I could try? More details here: Thank you team!
  21. Hi everybody, I am at my wits' end here and hope somebody with a similar system/setup can provide guidance, especially since it seems like with an ATI card I should not be encountering this. I am trying to get a stable Windows 10 VM running for gaming purposes, in an attempt to save on buying additional hardware for two machines. Some notes:
      Asus Prime X370-Pro (latest BIOS loaded)
      Ryzen 2700X CPU
      HIS ATI RX 480 GFX (only card installed, no onboard graphics available)
      2 x 8-port SAS cards (thus utilizing all three PCIe x16 slots)
      64GB RAM (Corsair LPX 3000MHz C15 memory kit for DDR4 systems) - I believe the BIOS with default settings is not running this at 3000MHz yet (will play around adjusting this once I get a stable solution going)
      No ACS multifunction enabled (all devices I want to pass through are in separate IOMMU groups - see the sketch after this post) - ACS multifunction was tested, however, with no change in the issues below
      64-bit Radeon Software Adrenalin Edition 18.10.2-oct25 was utilized
      First I tried a UEFI setup (UEFI enabled in the BIOS as well as on the Unraid flash), creating an OVMF Windows 10 template with the graphics card passed through (including a captured BIOS - the one downloaded from TechPowerUp didn't work). Starting the VM on OVMF did not produce any output on the connected monitor - not sure why. Reading through a few guides, it was recommended to disable UEFI and create the VM using SeaBIOS (I also disabled Hyper-V as part of my troubleshooting). This presented me with the normal Windows installation, which then allowed me to get to the desktop, install the VirtIO drivers for the three unknown devices (network especially), and disable Windows UAC. After this I restarted the VM, downloaded the ATI drivers, and installed. At roughly 60% (usually the point where a normal computer's display goes blank for a few seconds before resuming the driver install), the passed-through display switched off and never returned.
      KVM XML and diagnostics attached. To match time entries in the diagnostics syslog, please see below:
      19:35 - start VM (default VGA driver only - RX480 passed through)
      19:44 - installation started, custom: graphics and audio driver only
      19:47 - roughly 60%: screen goes dark, no further output from the VM
      19:52 - attempt normal VM stop via the Unraid web GUI: spinning icon for 20 seconds, then it shows the normal started icon again; nothing in the VM logs or syslog to indicate any issues
      19:54 - force stop and start VM: Windows logo is shown, spinning icon, then it freezes; no error in the VM or system log; have to force stop; not recoverable
      I am really not sure what I could be doing wrong here. Anybody have some additional pointers I could try? Thank you team! storage-diagnostics-20181101-2031.zip windows 10 .xml
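      For anyone verifying the same precondition, a common sketch for listing which IOMMU group each PCI device landed in (standard sysfs layout, not specific to this board):
          #!/bin/bash
          # Print every PCI device together with its IOMMU group number.
          for d in /sys/kernel/iommu_groups/*/devices/*; do
              g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
              printf 'IOMMU group %s: ' "$g"
              lspci -nns "${d##*/}"
          done | sort -V
          # The GPU and its HDMI audio function should appear in a group of
          # their own (or only alongside PCI bridges) before passthrough.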
  22. Hmm, OK, that looks more like what I'm planning. Are you doing passthrough, though? Your MB/CPU does not provide onboard graphics capability. And then another question about headless passthrough I am trying to figure out: what happens if you shut down the VM? Will the Unraid host remain on and then display the Unraid CLI?
  23. Thank you! That looks like it might fit the bill. Care to share your build (e.g. what cards are installed, and whether you are able to do headless passthrough)?
  24. Sorry for jumping in here, but I hope you guys can help. I am planning to upgrade to a Ryzen build - possibly a 2700X - but I am having a challenge selecting the MB. I have a 24-slot chassis, using all of the drive slots plus a cache. I currently have 2 x 8-port PCIe SAS cards, and then a combination of onboard SATA ports and older-generation PCI cards providing the remainder. I want to go the passthrough route (i.e. one main system that handles both Unraid and daily usage), thus I will need:
      PCIe x16 for the main system graphics (I'm currently running an RX 480) - my wish would be to have this passed through and Unraid headless, as it would save a precious PCIe slot for a secondary GFX card
      2 x PCIe x8 for the 8-port SAS cards
      at least 8 onboard SATA ports (or alternatively, 3 x 8-port SAS cards)
      cache drive?
      32/64GB DDR4-3000 RAM
      The following Threadripper board seems to have enough PCIe slots, but I just can't seem to locate an AM4-socket board that fits the bill: https://www.asrock.com/MB/AMD/X399 Taichi/index.asp Obviously I would prefer to select a system that is proven to be stable. TIA, NeoX