Everything posted by ich777

  1. If you did it in this specific order then this path should get deleted too. However, if you were using the mover, it may be that it didn't move this directory because it's a symlink (see the check below). Did you maybe stop the array and start it up again somewhere in between this process? It also creates the directory when the array is started <- I will add a check so that it is not created when the service is disabled.
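     A quick way to double-check, as a sketch (the path below is only an example, replace it with the directory in question):

        ls -ld /mnt/user/lxc        # a leading "l" in the mode (lrwxrwxrwx) means it is a symlink
        readlink -f /mnt/user/lxc   # prints the real target the symlink points to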
  2. What exactly? Can you post your Diagnostics please so that I get a bit more insight into what's going on or what is not working? Please also post screenshots and let me know what Unraid version you are on.
  3. I think this is caused by your lancache, but please keep in mind that I'm not the maintainer of that template and I'm not deep enough into lancache to know exactly how it works. Maybe a configuration error, I really don't know. Neither LANCache-Prefill nor LANCache will put much wear and tear on your disks, since it only saves data and of course reads it when someone downloads the game. I would also try a real PC and see if the same happens: if yes -> make a post in the lancache support thread; if no -> continue the conversation with all your logs in the GitHub issue that you've already posted, or even better create a new issue since it seems not related to that issue. RAID1 is not always better, especially for small chunks like lancache uses.
  4. That's not entirely true. Open up a terminal from the container and edit the network configuration; the exact file depends on the Debian release, but if you are running for example Bookworm it is /etc/systemd/network/eth0.network. Add something like this:

        [Match]
        Name=eth0

        [Network]
        Address=<STATICIP>/24
        Gateway=<GATEWAYIP>
        DNS=<DNSIP>

     After that restart the container and it will use those IPs.
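     A minimal sketch of the restart step from the Unraid host, assuming the container is managed by the LXC plugin; the container name "Debian" is only a placeholder (you can also simply restart it from the LXC tab):

        lxc-stop -n Debian    # stop the container
        lxc-start -n Debian   # start it again so systemd-networkd picks up the static IP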
  5. I wouldn't recommend doing it like that; it is always better to set a static IP in the container itself than in the config. Please also specify the distribution that you are using or include your Diagnostics next time.
  6. No. Did you reboot in between or something like that? This is a symlink and should not prevent you from formatting a drive. I would need a bit more information about what exactly you did in the process. EDIT: I've now looked a bit deeper into that and it makes sure that the path exists even if you disable the service, but I assume you deleted everything after you disabled the service and not before disabling it, correct?
  7. Are you sure that nothing is in the error.log or access.log from lancache itself? No, these errors occur while downloading and don't have anything to do with the HDD itself; the HDD can't know that the chunk you've downloaded doesn't match the CRC. If you are experiencing these issues, search your logs for "error" or "crc" (see the sketch below). However, if you are experiencing that even when downloading a game from a PC, lancache itself is the cause of the issue. Please don't use lancache on RAID1, it will certainly not be fast...
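     A minimal sketch of that log search; the log paths are assumptions and depend on where your lancache container stores its logs:

        grep -iE 'error|crc' /mnt/user/appdata/lancache/logs/access.log   # search both logs for CRC or other download errors
        grep -i  'crc'       /mnt/user/appdata/lancache/logs/error.log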
  8. This should not be an issue at all, have you played around with the sliders for quality and compression? It can get really slow if you set the quality too high. Have you tried using another computer yet to see if it's the same there too?
  9. I don't quite understand: downloading to your PC, or through Prefill through the LanCache? Check the logs, you most likely have some corrupt data in it. Sounds like CRC errors and defective chunks. This also seems related to 1) and 2). What do the logs say? Did you also rebuild the entire lancache folder? This indicates your lancache is not working properly. I would strongly recommend that you post on the support thread for lancache since this seems not like a lancache-prefill issue. Also check your access.log from the lancache container, but please note that I'm not the maintainer of lancache itself, I only created a container for lancache-prefill.
  10. That's strange, nothing changed in the container itself, I will investigate further and try it over here.
  11. Sorry for the inconvenience, I can't help much here; maybe it's a config issue or a leftover file from node16 somewhere.
  12. I tested this on a freshly installed instance and it started fine. Do you maybe have many modules installed where one is not compatible with node18? Have you tried enabling force update yet?
  13. If it is working in your local network then something with your port forwarding is wrong, please double-check your forwarding. Please do a pure NAT forwarding of the ports, so to speak: allow all connections to these ports from outside and point them to your local Unraid IP (or container IP). A quick way to test this is sketched below.
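      A rough check, as a sketch; the port 27015 and the addresses are placeholders, and for UDP ports you would add -u to nc (the UDP test is less reliable):

         ss -tulpn | grep 27015       # on the Unraid host: is the server actually listening on that port?
         nc -zv <PUBLIC-IP> 27015     # from outside your network (e.g. a phone on mobile data): is the forwarded port reachable?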
  14. I just looked into my Dockerfile and unrar is part of the container. Can you maybe post a screenshot of the exact error? You can also verify it yourself with the one-liner below.
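      A quick verification, as a sketch (the container name is a placeholder):

         docker exec -it <container-name> sh -c 'command -v unrar || echo "unrar not found"'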
  15. Please head over to the GitHub repository from Stonyx and create an issue there: https://github.com/Stonyx/QNAP-EC/issues since the plugin is based on his module.
  16. Thanks for the report. Will update it ASAP and let you know. Update done, tested and working over here.
  17. On what hardware do you run the container? Do you run it from your local network or do you use a reverse proxy?
  18. Just let me know when a new game comes out next time so you don't have to hack the container; I'm not always up to date on what is released when, and I try to create a container if I have the game in my library.
  19. Then go to this tab and click Create Container from Backup. In the following window make sure to enter the same name and click Create (this can take some time). If that doesn't work you can also do it, in your case, from the command line with: lxc-autobackup -r DNS DNS (the first "DNS" is the backup name and the second one is the new container name).
  20. I'm not 100% sure about that but could be... How did you create the backup? With the integrated Backup function? If yes, try to restore it with the same name. Did you enable the global backup?
  21. You set the lxc share to use the cache, which means that when the mover kicks in the files will be moved to the array and not stay on the cache; that's most likely what causes your issue. Set the LXC share so that the primary storage is the cache and it has no secondary storage. I see this in your Diagnostics, which is ultimately wrong:

         # Share exists on cache, disk1
         ...
         shareUseCache="yes"
         ...

      The first line means that the share exists on disk1 and on the cache, which is one part of the issue, and the second one indicates that the mover moves all the files to the array; it should be "only" instead of "yes" (BTW this is something you must have set at some point, by default it is created to stay on the cache). This is what it should look like:
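      A minimal sketch of the corrected setting; the file path and share name are assumptions based on the Diagnostics excerpt above:

         # /boot/config/shares/lxc.cfg (excerpt)
         shareUseCache="only"   # primary storage: cache, no secondary storage, so the mover leaves it alone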
  22. As the container name implies, this is the server component, not the web client, so that you can establish connections between your clients through your server. Sorry, the web client is not part of my container and never will be.
  23. Are you sure you were using my container? I have to take a look if unrar is installed in my container; I've not changed the packages that are included in it.
  24. The following should work:
      Install dialog since it does not ship with Unraid:

         wget -O /tmp/dialog-1.3_20211214-x86_64-1.txz ftp.linux.cz/pub/linux/slackware/slackware64-15.0/slackware64/a/dialog-1.3_20211214-x86_64-1.txz
         installpkg /tmp/dialog-1.3_20211214-x86_64-1.txz
         rm /tmp/dialog-1.3_20211214-x86_64-1.txz

      Download the script and make it executable:

         cd ~
         wget https://github.com/suykerbuyk/disk-helpers-scripts/raw/d1c58af8fe445a7adb2dd351468aecbe4acab6da/exosx2_zfs_ops.sh
         chmod +x exosx2_zfs_ops.sh

      Finally run the script:

         ./exosx2_zfs_ops.sh

      This should bring up a menu that you can navigate through.
      DISCLAIMER: Please note that these scripts are marked as experimental, so to speak; if you break something it's up to you...
  25. Do you plan on using SATA or SAS devices? This script is only for ZFS and SAS drives; as far as I can tell, SATA cannot present two drives over one connection, so you would need to create two separate partitions, where you don't get the same speed bump as with SAS. This script should even work fine for Unraid as far as I can tell.