Leaderboard

Popular Content

Showing content with the highest reputation on 11/20/20 in Posts

  1. Let's work together to repair this relationship. I love this community and what unRAID has become. Seeing it evolve over a decade shows the strength of this one large, united family. Let's fix this; we are better than that. I have re-read the comments and, although they make good points, they do not help us re-unite. @limetech came through and took the initial step with a public apology, attempting to repair the relationship, and yes, wounds do not heal overnight, but it is a two-way street. I cannot repair it myself, but from a community member's perspective it hurts to see it fall apart - let's stay united and kind. Again, we are better than this.
    6 points
  2. Please point one of these out to me. FWIW, I haven't received any email or PM from someone trying to be a Comm Dev with unanswered technical questions. I don't have a lot of time to monitor the entire forum; mostly these days I monitor Prerelease and any topics which are specifically brought to my attention. Also, I've always said that Unraid is not for everyone. We really have tried hard, though, to remain friendly and cooperative. If you feel you have not been treated fairly, I apologize for that and suggest that perhaps a different server solution would be better for you to be involved with.
    3 points
  3. Tom has done all he could be expected to do in this situation. Unfortunately, the genie is out of the bottle and cannot be put back. He has acknowledged that the rift was entirely of his making and has outlined every misstep he made that caused the situation. Some of the LSIO guys impacted by this are probably taking a break to let things simmer down and to consider what, if anything, their public response post-apology should be. It may in fact be best to leave it alone publicly for a while. If the schism is repaired behind the scenes and (as alluded to by Jon P.) steps are taken to "better engage with the community beyond just this issue", then I think that will be at least one silver lining in this rather large storm cloud, and perhaps some of the additional steps mentioned by @danioj can be taken. EDIT: BTW, I support the sentiment expressed by danioj, but I suspect those impacted most might also be taking some time to measure their response.
    3 points
  4. Those following the 6.9-beta releases have been witness to an unfolding schism, entirely of my own making, between myself and certain key Community Developers. To wit: in the last release, I built in some functionality that supplants a feature provided by, and long supported with a great deal of effort by, @CHBMB with assistance from @bass_rock and probably others. Not only did I release this functionality without acknowledging those developers' previous contributions, I didn't even give them notification that such functionality was forthcoming. To top it off, I worked with another talented developer who assisted with integration of this feature into Unraid OS, but who was not involved in the original functionality spearheaded by @CHBMB. Right, this was pretty egregious and unthinking of me, and for that I deeply apologize for the offense. The developers involved may or may not accept my apology, but in either case I hope they believe me when I say this offense was unintentional on my part. I was excited to finally get a feature built into the core product with what I thought was a fairly elegant solution. A classic case of leaping before looking. I have always said that the true utility and value of Unraid OS lies with our great Community. We have tried very hard over the years to keep this a friendly and helpful place where users of all technical abilities can get help and add value to the product. There are many other places on the Internet where people can argue and fight and get belittled; we've always wanted our Community to be different. To the extent that I myself have betrayed this basic tenet of the Community, again, I apologize and commit to making every effort to ensure our Developers are kept in the loop regarding the future technical direction of Unraid OS. Sincerely, Tom Mortensen, aka @limetech
    2 points
  5. If your data is sensitive and valuable, I wouldn't transport 3.5" spinning hard drives like that. If you want some flexibility and ruggedness, it would be worth your while to use SSDs and keep spinning drives for backups at an offsite, stationary location. It will significantly lower the weight of your system and require far less room.
    2 points
  6. Although I can't speak for my fellow team members, decisions like these usually aren't triggered by a single event; in most cases it's just the final straw... not saying it was like that in this case... but don't start "expecting" stuff in return just because a cheap apology was made. Some new developers trying to get into the Unraid community are belittled and left hanging without a proper response; you don't see them getting an apology... IMO, the backlash was just too big in this case and that's why you get this nice apology - simple business damage control at its finest. Don't let that be a reason to guilt others into doing things they may or may not want to do. Enjoy the post while it lasts...
    2 points
  7. In a world where most people are unable even to admit mistakes and never apologize for anything, I highly appreciate @limetech's statement. Everybody makes mistakes on a daily basis and nobody is protected against that - PERIOD! IMHO, true stature shows itself in small things, and an apology was stated. Now it's time for the devs to show true greatness and accept Tom's apology.
    2 points
  8. Nvidia-Driver (only Unraid 6.9.0beta35 and up)
     This plugin is only necessary if you are planning to make use of your Nvidia graphics card inside Docker containers. If you only want to use your Nvidia graphics card for a VM, then don't install this plugin! Discussions about modifications and/or patches that violate the EULA of the driver are not supported by me or anyone here; this could also lead to a take-down of the plugin itself! Please remember that this also violates the forum rules and such posts will be removed!
     Installation of the Nvidia drivers (this is only necessary for the first installation of the plugin):
     1. Go to the Community Applications App, search for 'Nvidia-Drivers' and click on the Download button (you have to be at least on Unraid 6.9.0beta35 to see the plugin in the CA App), or download it directly from here: https://raw.githubusercontent.com/ich777/unraid-nvidia-driver/master/nvidia-driver.plg
     2. Wait for the plugin to install successfully (don't close the window with the 'X'; wait for the 'DONE' button to appear). The installation can take some time depending on your internet connection - the plugin downloads the Nvidia driver package (~150MB) and then installs it on your Unraid server.
     3. Click on 'DONE' and continue with Step 4 (don't close this window for now; if you closed it already, don't worry, just keep reading).
     4. Check that everything is installed and recognized correctly: go to PLUGINS -> Nvidia-Driver (if you don't see a driver version at 'Nvidia Driver Version', or you get another error, please scroll down to the Troubleshooting section).
     5. If everything shows up correctly, click on the red alert notification from Step 3 (not on the 'X'); this will bring you to the Docker settings (if you already closed that window, go to Settings -> Docker). On the Docker page change 'Enable Docker' from 'Yes' to 'No' and hit 'Apply' (you can now close the message from Step 2).
     6. Change 'Enable Docker' back from 'No' to 'Yes' and hit 'Apply' again. This step is only necessary for the first plugin installation, and you can skip it if you are going to reboot the server - the background is that the Nvidia driver package also installs a file that interacts directly with the Docker daemon, and the Docker daemon needs to be reloaded in order to load that file.
     After that, you should be able to utilize your Nvidia graphics card in your Docker containers; for how to do that, see Post 2 in this thread (a rough command-line sketch also follows below).
     IMPORTANT: If you don't plan or want to use acceleration within Docker containers through your Nvidia graphics card, then don't install this plugin! Please be sure to never use one card for a VM and for Docker containers at the same time (your server will hard-lock if the card is in use by a VM and then something wants to use it in a container). You can use one card for more than one container at the same time, depending on the capabilities of your card.
     Troubleshooting (this section will be updated as soon as someone reports an issue and will grow over time):
     - "NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.": This means that the installed driver can't find a supported Nvidia graphics card in your server (it may also be a problem with your hardware - riser cables, etc.). Check whether you accidentally bound all your cards to VFIO; you need at least one card that is supported by the installed driver (you can find a list of all drivers here; click on the corresponding driver at 'Linux x86_64/AMD64/EM64T', then on the next page click on 'Supported products' to find all cards supported by that driver). If you accidentally bound all cards to VFIO, unbind the card you want to use for the Docker container(s) and reboot the server (TOOLS -> System devices -> unselect the card -> BIND SELECTED TO VFIO AT BOOT -> restart your server).
     - docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused "process_linux.go:432: running prestart hook 0 caused \"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: device error: GPU-9cfdd18c-2b41-b158-f67b-720279bc77fd: unknown device\\n\""": unknown.: Please check the 'NVIDIA_VISIBLE_DEVICES' variable inside your Docker template; it may be that you accidentally have what looks like a space at the end or in front of your UUID, like ' GPU-9cfdd18c-2b41-b158-f67b-720279bc77fd' (it's hard to see in this example, but it's there).
     - If your card is not recognized by 'nvidia-smi', please also check your 'Syslinux configuration' in case you earlier prevented Unraid from using the card during the boot process.
     Reporting problems: If you have a problem, please always include a screenshot of the plugin page, a screenshot of the output of the command 'nvidia-smi' (simply open up an Unraid terminal with the button at the top right of Unraid and type in 'nvidia-smi' without quotes), and the error from the startup of the container/app if there is any.
    1 point
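     To complement the plugin instructions in the post above, here is a minimal command-line sketch of wiring a container up to the card once the driver is loaded. The container name and image are placeholders, the UUID shown is just the example value from the troubleshooting section, and the authoritative template settings are the ones described in Post 2 of this thread.
         # Confirm the driver sees the card and note its UUID (also shown on the plugin page)
         nvidia-smi --query-gpu=name,uuid --format=csv

         # Rough docker-run equivalent of the template settings: '--runtime=nvidia' goes into
         # 'Extra Parameters', the two variables are added as container variables.
         # Paste your own UUID with no leading or trailing spaces.
         docker run -d --name=example-app \
           --runtime=nvidia \
           -e NVIDIA_VISIBLE_DEVICES=GPU-9cfdd18c-2b41-b158-f67b-720279bc77fd \
           -e NVIDIA_DRIVER_CAPABILITIES=all \
           example/image:latest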
  9. DVB-Driver (only Unraid 6.9.0beta35 and up)
     This plugin will add DVB drivers to Unraid. Please note that this plugin is community driven and, when a newer version of Unraid is released, the drivers/modules have to be updated (please make a short post here, or see the second post to check whether the drivers/modules are already updated; if you update to a newer version before the new drivers/modules are built, this could break your DVB support in Unraid)!
     Installation of the plugin (this is only necessary for the first installation of the plugin):
     1. Go to the Community Applications App, search for 'DVB-Drivers' and click on the Download button (you have to be at least on Unraid 6.9.0beta35 to see the plugin in the CA App), or download it directly from here: https://raw.githubusercontent.com/ich777/unraid-dvb-driver/master/dvb-driver.plg
     2. Wait for the plugin to install successfully (don't close the window with the 'X'; wait for the 'DONE' button to appear). The installation can take some time depending on your internet connection - the plugin downloads a custom bzimage with the necessary DVB kernel modules and the DVB driver itself, and installs them on your Unraid server.
     3. Click on 'DONE', read the alert message that appears in the top right-hand corner, and close it with the 'X'.
     4. You can skip this step if you want to use the LibreELEC driver package (selected by default). If you want to choose another driver package, go to the plugin itself (PLUGINS -> DVB-Driver), choose which version you want to install and click on 'UPDATE' (currently LibreELEC, TBS-OpenSource, DigitalDevices and Xbox One USB DVB Adapter drivers are available).
     5. Reboot your server (MAIN -> REBOOT).
     6. After the reboot, go back to the plugin page (PLUGINS -> DVB-Driver) and check whether the cards are properly recognized (if your card(s) aren't recognized, please see the Troubleshooting section or make a post in this thread, but be sure to read the Reporting Problems section in this post first).
     Utilize the DVB card(s) in a Docker container: to utilize your DVB card(s) in a Docker container, in this example Tvheadend, add '--device=/dev/dvb/' to the 'Extra Parameters' in your Docker template (you have to enable 'Advanced view' in the template to see this option). You should then see the card(s) in the Docker container.
     IMPORTANT: If you switch between driver packages, a reboot is always necessary!
     DigitalDevices notes (this applies only if you selected the DigitalDevices drivers in the plugin): If you are experiencing I²C timeouts in your syslog, please append 'ddbridge.msi=0' to your syslinux configuration (a sample append line also follows this post). You can also switch the operating modes for the Max S8/SX8/SX8 Basic with the following options:
     'ddbridge.fmode=0' - 4-tuner mode (internal multi-switch deactivated)
     'ddbridge.fmode=1' - Quad-LNB / normal outputs of the multiswitch
     'ddbridge.fmode=2' - Quattro-LNB / cascade outputs of the multiswitch
     'ddbridge.fmode=3' - Unicable or JESS LNB / Unicable output of the multiswitch
     Link to source. You can also combine 'ddbridge.msi=0' (though you don't have to if you don't experience I²C timeouts) with, for example, 'ddbridge.fmode=0'. Here is how to do it: go to the 'Main' tab and click on the blue text 'Flash', scroll down a little and append the commands mentioned above to the syslinux configuration (as stated above, you don't need to append 'ddbridge.msi=0' if you don't experience I²C timeouts), then click on 'Apply' at the bottom and reboot your server!
     TBS-OpenSource notes: You can also switch the operating modes of the TBS cards, in this example the TBS-6909 or TBS-6903-x, by appending one of the following commands to your syslinux configuration (the how-to is above): 'mxl58x.mode=0', 'mxl58x.mode=1' or 'mxl58x.mode=2' (the modes are illustrated at the Link to source).
     Troubleshooting (this section will be updated as soon as someone reports a common issue and will grow over time):
     Reporting problems: If you have a problem, please always include a screenshot of the plugin page, a textfile or a pastebin link with the output of 'lspci -v' or 'lsusb -v' - depending on whether your card is PCIe or USB (simply open up an Unraid terminal with the button at the top right of Unraid and type in one of the two commands without quotes) - and also the output of 'dmesg' as a textfile or a pastebin link (simply to not spam the thread with the output).
    1 point
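     As a reference for the DigitalDevices notes above, a hedged sketch of what the syslinux configuration might look like after appending the options; the surrounding label block is the stock Unraid one, and any parameters already on your append line should of course be kept.
         label Unraid OS
           menu default
           kernel /bzimage
           # ddbridge.msi=0 only if you see I²C timeouts; pick the fmode per the list above
           append ddbridge.msi=0 ddbridge.fmode=0 initrd=/bzroot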
  10. Please note: The author of the application is aware that the software needs rework. This is ongoing at the moment, so if you find the application a bit hard to work with, you may want to be patient and wait for the rework to progress. As it is now, the app can be made to work with some fiddling, but it's not very smooth. I still think this application provides a great benefit to the community of doujinshi collectors, and I am not going to pull the (marked as beta in CAs, as before) template for it. Thank you and happy reading! _______________________________________ Hello, and I am glad you're stopping by to check out this support thread for my CA template for HappyPandaX. From the application's author: the application itself is released in alpha stage, and this CA template is in beta. I would appreciate it if you could help me test this application. There are some known issues, e.g. downloading from Exhentai requires being logged in (go into About > Plugins and install the plugins that are available for a stock installation, click on "Open Plugin Site" for the EHentai Login plugin, and follow the instructions and help from there) and having credits available for downloading. As for downloading from other pages, so far no luck for me on nhentai; however, scraping metadata for existing doujinshi is working fine for me. You can stage your scans and metadata scrapes before you commit them to your collection (Add > Scan). GitHub Link | Application Author's Patreon, if you want to support them | Documentation. The creator is also an artist and links to their various outlets on Patreon and Twitter. Support for this CA, in the context of the CA implementation, is provided by me; however, do keep in mind the application is alpha and the CA template is beta. Change Log (just the unRAID CA template) - 2020 June 1: Initial release to CAs. Cheers and happy organizing!
    1 point
  11. With the help of @ich777 and @zspearmint, I am delighted to unveil unraid.net in German. 😀
    1 point
  12. I'd buy this service right now. I only go near my Unraid once every few years and waste far too much time trying to relearn stuff. I'd much rather just hand money to someone to solve it for me. If you want some paid support work, please contact me: +6421742634. Current problem: can't format a newly installed drive.
    1 point
  13. You found an APCUPSD conflict with Corsair iCue 👍 Would you provide the exact hardware info, so other members can quickly identify the problem device next time? Edit: Got it... Commander Pro
    1 point
  14. That is expected. Does NUT show realtime power usage? I don't think so; my first UPS, a BE-550G-UK, won't show that in NUT or APCUPSD. It is a hardware limitation, not a software issue. I was just trying to troubleshoot why APCUPSD doesn't work.
    1 point
  15. OK, this seems related to Corsair iCue - could you try disconnecting it? Oct 31 11:18:19 tower kernel: hid-generic 0003:1B1C:0C10.0001: hid_field_extract() called with n (128) > 32! (swapper/5) https://forum.corsair.com/v3/showthread.php?t=189994&page=6
    1 point
  16. Thank you for sharing that link. I added a question to that thread. Maybe there is a specific problem with non-reference cards or something like that; at least I hope so 🙈 "Doable" sounds pretty good to me 🤪 Have a nice weekend!!!
    1 point
  17. FYI: Important security update to fix potentially vulnerable passwords: https://forum.rclone.org/t/rclone-1-53-3-release/20569
    1 point
  18. It's unlikely that any filesystem resize will work due to the damage, but you can resize the partition by creating a new one on top of the existing one: same start sector, but using the full disk (a rough fdisk sketch follows below).
    1 point
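     A hedged illustration of that "same start sector, full disk" approach using fdisk; the device name is a placeholder, the start sector must be read from your own disk first, and the existing filesystem signature should be kept when prompted.
         fdisk -l /dev/sdX        # note the Start sector of the existing partition
         fdisk /dev/sdX
         # inside fdisk:
         #   d   - delete the existing partition
         #   n   - create a new partition with the same number
         #         First sector: the same start sector noted above
         #         Last sector:  accept the default (end of disk)
         #   N   - answer 'No' if asked whether to remove the filesystem signature
         #   w   - write the new table and exit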
  19. Except that every feature added directly to the OS consumes RAM, whether you use it or not. Unraid isn't like pretty much any other OS that runs from a drive; everything is installed fresh into RAM at boot time. That's why Unraid doesn't include drivers for everything normally supported in Linux - all that space in RAM would be wasted for 90% of users. Every feature that is included in the stock base of Unraid must be carefully evaluated as to whether or not the increase in RAM required to store the files is worth it. It's kind of a double whammy: the files consume space on the RAM drive whether or not you use them, and when you run a feature it uses RAM as well - but as you said, that part is optional, you don't have to run it. Features added using the Docker container system DON'T take up RAM, as those reside on the regular storage drives and only use RAM when run. As Unraid has added more features, the RAM requirements have increased significantly. As a NAS only, 1GB of RAM was plenty. Currently, even if somebody only needs the NAS functionality, the bare minimum is 4GB, and even that is too tight for some operations. That's because all of the added features take up RAM, even though they aren't used. As time goes on, and the typical amount of RAM in older systems that people want to repurpose as NAS devices increases, adding features becomes less of an issue. Right now, though, we still regularly see people trying to get Unraid to run on 2GB of RAM (a quick way to see the RAM cost on a live server is sketched below).
    1 point
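     A hedged way to see the point above on a running server: Unraid's root filesystem lives in RAM, so the space reported there is memory spent on the unpacked OS whether or not a feature is ever used. These are standard Linux tools; the exact figures vary by release and installed plugins.
         df -h /                # size and usage of the RAM-backed root filesystem
         free -m                # overall RAM picture on the server
         du -sh /usr /lib       # rough size of the unpacked OS files held in RAM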
  20. I think this is doable, but be aware that someone has already posted a GitHub issue saying that this doesn't work on kernel 5.8 and newer with a 5700XT... Unraid 6.9.0-beta35 is already on kernel version 5.8.18 and will be upgraded to 5.9.x. EDIT: Forgot to include the link to the GitHub issue.
    1 point
  21. ATM I'm using 6.9.0-beta1; every step in this direction is appreciated, as it is by all of us. Take your time - I'll stay on my current version until either the community or Unraid itself gets this implemented. BTW, thank you for your provided containers 🙋‍♂️
    1 point
  22. For utilization in containers it's too old, because the driver won't support it. No, this should be fine, since you can install an older driver inside the VM so that your card will work.
    1 point
  23. Yep, all good and devices configured in Plex
    1 point
  24. An update on this to close it out. After replacing the SATA cable and switching the SATA port, the btrfs errors continued after rebuilding the filesystem and Docker image multiple times. I removed the suspect SATA SSD and went to a single cache drive, and the errors stopped. I have concluded that the SSD had failed, and I have replaced it. All is restored and back to normal now.
    1 point
  25. Hi ich777, I've worked it out. It turns out that a lot of the older TBS cards don't like anything other than PCIe Gen1. My server had all slots configured for Gen2; I switched these down to Gen1 in the BIOS and can confirm it's all working (a quick way to verify the negotiated link speed is sketched below). Thanks again for your hard work on this.
    1 point
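     For anyone checking the same thing, a hedged way to confirm what PCIe link speed a card actually negotiated; the bus address is a placeholder taken from the lspci output, and Gen1 shows up as 2.5GT/s while Gen2 is 5GT/s.
         lspci -nn | grep -iE 'multimedia|tbs'              # find the DVB card's bus address, e.g. 03:00.0
         lspci -vv -s 03:00.0 | grep -iE 'LnkCap|LnkSta'    # compare supported vs negotiated link speed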
  26. Welcome! I also use OMV on my old N36L. Great boxes...... paid £110 for mine and it's still happily chugging along after nearly 10 years.
    1 point
  27. Could be - maybe my welcome party has given me a bad aftertaste. Then again, it's just your opinion, and we all have one. Having said that, it doesn't make it any less true.
    1 point
  28. Unhelpful, inflammatory, provoking and downright unnecessary. Also, if I were to define the set of values that makes this community group so strong, I would say there isn't a word in your post that would align with them.
    1 point
  29. Sounds pretty similar to my issue with the binhex plex container. After switching to binhex plexpass everything worked fine.
    1 point
  30. Hi! Thank you for your fast reply. You are absolutely right!!! That makes sense, because only SATA-connected devices are affected. 08.00.0 was my Nvidia 2060 GPU, which I had added to the vfio config. Installing the PCI device in the second 16x slot seems to have changed the IOMMU numbering of these devices, as the screenshot showed: now my GPU has the number 09.00.0 (before 08.00.0). So I changed the config as you mentioned and now it works. Thank you so much for your help!!!! Have a nice day! Best regards, Reini
    1 point
  31. @ljm42 Thanks for your input. I will consider the zero-value problem in a future release. For now I will publish this script on GitHub only; if I find the time, I will write it as a plugin. "frag -f" is sadly nearly useless, as described in this post. As an example, all my HDDs return a high fragmentation factor although no defragmentation is possible (as they mainly contain huge movies). Maybe it would be possible to calculate the real fragmentation factor by counting files and their sizes, dividing them by 8GB, and finally comparing this value with the value returned by "frag -f" (a rough sketch of that idea follows below). I need to think about that...
    1 point
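     A rough, hypothetical sketch of that calculation on an XFS disk share, assuming the 8GB figure from the post is the largest extent a file would ideally need; actual extents are counted with xfs_bmap, and the share path is a placeholder.
         #!/bin/bash
         # Hypothetical fragmentation estimate: sum each file's actual extents and compare
         # against an ideal count of ceil(size / 8 GiB) extents per file.
         SHARE=/mnt/disk1/Movies                      # placeholder path
         MAX_EXTENT=$((8 * 1024 * 1024 * 1024))       # 8 GiB, per the post's assumption
         actual=0; ideal=0
         while IFS= read -r -d '' f; do
             size=$(stat -c %s "$f")
             ext=$(xfs_bmap "$f" | tail -n +2 | grep -vcE 'hole|no extents')
             actual=$((actual + ext))
             ideal=$((ideal + (size + MAX_EXTENT - 1) / MAX_EXTENT))
         done < <(find "$SHARE" -type f -print0)
         echo "actual extents: $actual, ideal extents: $ideal"
         [ "$actual" -gt 0 ] && echo "estimated fragmentation: $(( (actual - ideal) * 100 / actual ))%"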
  32. As others have mentioned, we are actually growing as a company. When I started out early last year, I was the 4th FTE. We are now almost double that, and I can assure you we are all working very hard to get this Stable release out. We also have some exciting things in store, so please stay tuned! This has been a tough year for a variety of reasons. Even in the best of times it's not easy growing as an all-remote company, but rest assured, we aren't going anywhere and have big plans for the future!
    1 point
  33. Acting uncharacteristically extreme sometimes when we get hurt is very human and understandable, and my personal opinion is that is what a few of the fellas at @linuxserver.io did after the exchanges on the previous thread. I have no reason to doubt the sincerity of this post by @limetech and was therefore hoping there would be some return comms from @linuxserver.io to do their part in repairing this bridge. It might be nice if those who "retired" or decided they were now "out" or were "quitting unRAID work" came back. It might also be nice if some of the support threads that were locked were unlocked, some links reinstated where appropriate, and edited posts that now read "Deprecated" replaced with more helpful information, so we can start a joint and peaceful transition to the new official unRAID build with appropriate guidance for the community members who might have missed the recent exchanges. Personally, at the very least I'd like to see an acknowledgment of this post from the team if they are just not quite ready to move on yet - which is also understandable; wounds don't often heal overnight. Paraphrasing @aptalca, no one likes a one-way street. If virtual hands could be shaken here... in an ideal world (for me) publicly, then the symbiotic unRAID-centric relationship between a big community contributor and the company can continue and we all move on together! 🙂
    1 point
  34. I had typed and re-read and re-typed what I said below because I wanted to make sure I said what I meant. I'd like to 100% agree with you on this, but I can't completely - though I do to a point. You have to remember unRAID is partially open source, which allows the community to add on to it. Those add-ons make unRAID very appealing and more popular than most mass storage options, and at one time add-ons were mostly considered hacks. @limetech has spent thousands of hours improving security, improving the interface and implementing many, many requests alongside what the community has done. Drivers, new kernels, VM and Docker support, and many other things have been implemented to support all of our needs and give more flexibility for add-ons. I've heard Tom say time and time again: "User data protection is foremost the most important thing." Tom put out a really heartfelt apology and you can tell he really meant it. Don't believe me? Try PMing him sometime; he's one of the most generous guys I've ever chatted with. When I use the word Community I mean everybody, from Tom to the newest member who just joined, and I hope everybody else thinks the same. I like to think of it like this: 4 people in the car. 1. Driver, making sure we arrive safely. 2. Navigator, guiding the driver to the destination. 3. Backseat passenger: "Are we there yet?" 4. "I want! I want! I want!" All of us fit in at least one (or many) of those seats. As long as we are all willing to ride in the car together, there is nowhere we can't take unRAID. Anybody have a school bus handy? 😃
    1 point
  35. Added psutil. You should be able to install it using pip3 now (quick check below).
    1 point
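     If it helps, a quick hedged check from the container or server console that the module is importable after installation; nothing here is specific to this image beyond the package name mentioned above.
         pip3 install psutil
         python3 -c "import psutil; print(psutil.__version__)"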
  36. You mean like this? Yeah you can install the Ultimate UNRAID Dashboard I developed. It has this functionality and more. You can find it here: https://forums.unraid.net/topic/96895-ultimate-unraid-dashboard-uud/
    1 point
  37. This is true. Unfortunately, communicating development plans like this (ad hoc, not centralized) is probably not the best way to go about it. Rest assured that Tom's post here was only the first step. He and I have been having conversations about how we can better engage with the community beyond just this issue. Expect to see another post from me sometime this week with more details on this. At the end of the day we all realize how vital our community is to our continued success. I just hope that everyone can appreciate how big this community really is now, because managing expectations for such a large audience is a far more daunting task than it was for us back in 2014 (when I started with Lime Technology).
    1 point
  38. @trurl Please lock this thread. Due to an unexpected announcement from @limetech this is no longer required. I'm quitting Unraid work.
    1 point
  39. I think I figured this out from the binhex delugevpn log. https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md See Question 19. I used the new PIA files from the link there and now NZBGet is working again with VPN. PIA must have disabled their old servers. I am getting much better speeds now in both torrents and usenet. craigr
    1 point
  40. The shutdown option is now available in the version of the Parity Check Tuning plugin I released today. I would be interested in any feedback on how I have implemented it, or any issues found while trying to use the shutdown feature.
    1 point
  41. I personally like UnFire. Certainly better than UnOven, UnCool, UnBBQ, UnLava et al. Either way, this plugin may become the UnSun hero for some people in 2020/2021.
    1 point
  42. After unsuccessfully changing the branch from 'latest' to 'dev' and running chmod -R 777 /mnt/user/appdata/pihole/, I found a solution: in my Docker config I changed /mnt/cache/appdata/pihole/pihole/ to /mnt/user/appdata/pihole/pihole/ and /mnt/cache/appdata/pihole/dnsmasq.d/ to /mnt/user/appdata/pihole/dnsmasq.d/ (a sketch of the resulting mappings is below). All the error messages seem to have gone away.
    1 point
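     For context, a hedged sketch of those corrected host-path mappings written as a plain docker run command; the container-side paths are the usual Pi-hole ones, and the image tag plus any network/port settings from the actual template are omitted here.
         docker run -d --name=pihole \
           -v /mnt/user/appdata/pihole/pihole/:/etc/pihole/ \
           -v /mnt/user/appdata/pihole/dnsmasq.d/:/etc/dnsmasq.d/ \
           pihole/pihole:latest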
  43. I've just realised I have/had the EAC3 'issue', i.e. any video file with EAC3 audio requiring an audio transcode down to 2 channels (most of my client apps) won't play, and I get the following log entry: "ERROR - [Transcoder] [eac3_eae @ 0x7e9840] EAE timeout! EAE not running, or wrong folder? Could not read '/tmp/pms-198c89ec-c5fa-4ceb-99dc-409b57434d00/EasyAudioEncoder/Convert to WAV (to 8ch or less)/C02939D8-5F8B-432B-9FD9-6E7F76C40456_522-0-21.wav'" I found a solution in this thread: just delete the appdata\plex\..\Codecs folder and restart so it gets recreated, and everything seems fine now! Instead of deleting, I just renamed the folder to "Codecs_OLD" so I could see what the difference was. There are only two differences: 1. the licence file has different contents; 2. (probably the most crucial!) the "EasyAudioEncoder" file (2.5MB, no extension) does not have the executable flag set in the old, non-working version! I think this happened after the update a day or so ago, or that's when I noticed it. Obviously it's fixed for now, but I'm just wondering if anyone has any idea how it might have happened, in case it comes back at a later date (a way to check and restore the flag is sketched below).
    1 point
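     If it comes back, a hedged way to check for and restore the missing executable flag without wiping the Codecs folder; the appdata root is an example and depends on how your Plex container's config path is mapped.
         # locate the EasyAudioEncoder binary under appdata (the exact path varies by container)
         find /mnt/user/appdata -type f -name EasyAudioEncoder -ls
         # restore the executable flag if it is missing, then restart the Plex container
         find /mnt/user/appdata -type f -name EasyAudioEncoder -exec chmod +x {} +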
  44. Any plans for adding support for LTO backup? Speaking as a data hoarder, I know my server has grown to a size such that a catastrophic failure would be almost impossible to recover from, yet 1:1 backup options are limited IF you want LONG TERM storage. Hard drives don't last for decades in cold storage, and I suspect many of us are only one good power surge away from disaster. A UPS and a power-conditioning transformer only protect you so far.
    1 point
  45. Turn on Help on the cache settings. Cache: No means new files written to the share will go directly to the array, and the mover won't touch files already on the cache for that share. What you want is Cache: Yes; run the mover, then make sure no new files are written to the cache by turning it back to Cache: No.
    1 point