rampage

Members • 44 posts

Everything posted by rampage

  1. If not, is there any solution on unRAID to utilize this for video playback instead of buying another GPU? Thanks.
  2. If anyone is wondering why your docker instance is not working / errors out connecting to the database after installing: you need to manually create the database in your MariaDB; the docker container will not do this for you. After you've created the empty database, add the variable WORDPRESS_DB_NAME = your-database-name in your docker setup.
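     As a sketch of that manual step (the container name "mariadb" and the database name "wordpress" are assumptions; adjust both to your setup, and you will be prompted for your MariaDB root password):

     ```shell
     # Hypothetical container name "mariadb" and database name "wordpress";
     # creates the empty database the WordPress container expects to find.
     docker exec -it mariadb mysql -uroot -p \
       -e "CREATE DATABASE IF NOT EXISTS wordpress CHARACTER SET utf8mb4;"
     ```

     The database name you create here is what goes into WORDPRESS_DB_NAME.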
  3. It was formatted by an Xbox 360 game console, so the partition table might have something special in it. Windows can read and write the FAT32 partition just fine; fdisk might have been confused by that.
  4. Can't seem to be able to mount a 2TB USB FAT32-formatted drive:

     parted -l
     Model: Seagate Expansion (scsi)
     Disk /dev/sdg: 2000GB
     Sector size (logical/physical): 512B/512B
     Partition Table: loop
     Disk Flags:
     Number  Start  End     Size    File system  Flags
      1      0.00B  2000GB  2000GB  fat32

     fdisk -l /dev/sdg
     Disk /dev/sdg: 1.82 TiB, 2000398933504 bytes, 3907029167 sectors
     Disk model: Expansion
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 512 bytes
     I/O size (minimum/optimal): 512 bytes / 512 bytes
     Disklabel type: dos
     Disk identifier: 0x73696d20
     Device     Boot      Start        End    Sectors   Size Id Type
     /dev/sdg1       1919230059 6204919772 4285689714     2T  a OS/2 Boot Manager
     /dev/sdg2        544829025 1089655755  544826731 259.8G 65 Novell Netware 386
     /dev/sdg3        168653938  168653938          0     0B 65 Novell Netware 386
     /dev/sdg4       2885681152 2885734080      52929  25.8M  0 Empty
     Partition table entries are not in disk order.

     It was formatted on an Xbox 360; does that make a difference? On Windows it just shows as one FAT32 partition. I'm able to mount it via shell with mount -t msdos
     https://ibb.co/RgT0nrW
     https://ibb.co/h2RTw52
     But even mounted with this command, I'm not able to make directories with long names. I guess such a mount method on Linux can only do 8.3 naming? Is there an option I could pass to mount to support long names, just like on Windows?
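     On Linux the msdos filesystem driver is indeed 8.3-only; long filenames on FAT32 come from the vfat driver instead. A sketch, assuming the partition is /dev/sdg1 and the mount point is an example:

     ```shell
     # msdos = 8.3 names only; vfat = FAT with long filename (VFAT) support
     mkdir -p /mnt/usbdrive
     mount -t vfat /dev/sdg1 /mnt/usbdrive
     ```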
  5. It is still a good idea. Some people think this is redundant, but more backup is not redundant, and in many cases it will come in handy. For example, if you send a set of files to other people, you can include the par file; you cannot send the parity data on your parity drive to other people.
  6. Is this expected or a bug? Only users added via the webui will be kept?
  7. This doesn't seem to work for me; I'm getting HTTP ERROR 400 while opening the WebUI. Do I have to change anything in the default template?
  8. I have noticed my VM has just disappeared several times when the system was under heavy CPU usage. The host has 8GB of RAM and the VM uses 3GB; it's a Windows 7 VM, so I don't think the VM ran out of its RAM allocation. Is there anything I can do to try to find the cause? I run 6.9.0-RC2.
  9. If you have the time to configure them one by one in a VM, would having all of those applications running in a single VM save memory usage? It seems each docker uses more RAM than the same application running on a bare OS.
  10. BTRFS can't hold your swap file, out of luck
  11. When I try to run a 2nd instance of linuxserver's qbittorrent docker, with a different config path, different docker tag, and different WebUI port, the WebUI just reports "Unauthorized". The log doesn't show anything, and the 1st instance runs fine. So what have I done wrong? Yes, I have tried putting the address in the WebUI whitelist and enabling it; it doesn't work.
  12. From GitHub https://github.com/shirosaidev/diskover, the requirements are: Elasticsearch 5.6.x (local or AWS ES Service; Elasticsearch 6 is not supported, ES 7 is supported in the Enterprise version) and Redis 4.x.
      Working steps (if you do anything wrong, remove the docker and remove the docker's config folder in appdata; you can keep the docker image to avoid downloading it again):
      0. Install redis from Apps (jj9987's Repository). No config needed.
      1. Install the CA User Scripts plugin. Create a new script named vm.max_map_count, navigate to \flash\config\plugins\user.scripts\scripts\vm.max_map_count, open the 'description' file and write a readable description of what this script does, then open the 'script' file; its contents are as follows:
         #!/bin/bash
         sysctl -w vm.max_map_count=262144
         Set the script schedule to At Startup of Array and run the script once.
         Navigate to the "Docker" tab and then the "Docker Repositories" sub-tab in the unRAID WebUI, enter https://github.com/OFark/docker-templates in the "Template repositories" field, and click the "Save" button. Click back to the "Docker" tab, click the "Add Container" button, open the "Template" dropdown menu and select the Elasticsearch5 image. Use the pre-config; no change needed.
      2. Go to Apps, find diskover, and click install. Put in the IPs of the redis and elastic servers, which should be your unRAID IP, not 127.0.0.1 or localhost. ES_USER: elastic, ES_PASS: changeme. Change the appdata path to /mnt/cache/appdata/diskover/. For the data path I use /mnt/user/, which is going to index everything from the user shares. The WebUI port I changed to 8081 because I have qBittorrent on 8080. Add a new variable DISKOVER_AUTH_TOKEN; its value comes from https://github.com/shirosaidev/diskover/wiki/Auth-token. Click start, and you should be good to go with the diskover WebUI; select the 1st index and happy searching. It might take half a minute for the 1st index to appear.
      For the whole process, you do not seem to need to change any folder/file permissions.
      One problem I got: the file index reached 94.5% and got stuck there for hours, so I had to delete the 3 dockers and do it again; this time it reached 100% and seems to be OK. But this also means this setup can sometimes get stuck while indexing. OFark's docker template uses Elasticsearch 5, which might be a bit old for the current version of diskover, or running from docker caused this. OFark's docker image is a preconfigured, working one. If anyone has time, maybe try to build a version 6 or 7 docker image to work with the current version of diskover.
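      Once the Elasticsearch5 container is up, you can sanity-check it before (and after) installing diskover by asking ES directly; a sketch assuming the default elastic/changeme credentials, the default port 9200, and your unRAID IP in place of UNRAID_IP:

      ```shell
      # Cluster health; should report status green or yellow if ES5 is running
      curl -u elastic:changeme http://UNRAID_IP:9200/_cluster/health?pretty
      # List indices; diskover's indices should appear here once a crawl starts
      curl -u elastic:changeme http://UNRAID_IP:9200/_cat/indices?v
      ```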
  13. "No diskover indices found in Elasticsearch": you need to use this Elasticsearch docker image here, not the one from CA/Docker. The default Elasticsearch username/password while installing diskover is elastic / changeme. You will also need to get an auth token for diskover: https://github.com/shirosaidev/diskover/wiki/Auth-token
  14. Yes, "fail to create symbolic link" does cause problems. I believe it tries to store some file inside the docker, but you removed the config folder in appdata, which made it really confused. On the other hand, storing data inside the docker image is not something a well-behaved docker image should do, I think. What you need to do to fix this is remove the docker's config folder from appdata and also remove the docker itself (you can keep the docker image). This way all the data this docker created will be removed, and then you reinstall the docker.
  15. Thanks, found it. It would be nice for unRAID to carry over the setting for how much free cache space is required in the Global Share Settings when such a change takes place; otherwise people might just fill their cache drive to the brim.
  16. "Keep some cache space free" gone from 'Global Share Settings' in 6.9-RC2? There used to be a setting here to keep some space free on the cache disk so it won't get near 100% full, but with 6.9.0-RC2 this setting seems to be gone. Is the function deprecated, or has it moved somewhere?
  17. Does hardlinking of downloaded media files work with radarr/sonarr running in docker and the download client, say delugevpn, running in another docker?
      radarr has a docker mapping of /media to host /mnt/user/media, and will store movies in /mnt/user/media/movie
      sonarr has a docker mapping of /media to host /mnt/user/media, and will store TV shows in /mnt/user/media/tvshow
      delugevpn has /media mapped to host /mnt/user/media, and will download files into /mnt/user/media/download
      Edit: OK, this should be the case; my idea above should work fine according to https://trash-guides.info/Misc/how-to-set-up-hardlinks-and-atomic-moves/ and I will give it a try.
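      The quickest way to confirm hardlinks work with a layout like this is to compare inode numbers: a hardlink is one inode reachable from two paths, which is also why both containers must see a single mapped filesystem (one /media mapping) rather than separate /downloads and /movies mappings. A sketch with throwaway paths under /tmp standing in for the real /mnt/user/media tree:

      ```shell
      # Throwaway demo paths; on unRAID these would live under /mnt/user/media
      mkdir -p /tmp/media/download /tmp/media/movie
      echo demo > /tmp/media/download/film.mkv
      # What radarr does on import when a hardlink is possible
      ln /tmp/media/download/film.mkv /tmp/media/movie/film.mkv
      # The same inode number on both paths means one file, no extra space used
      stat -c %i /tmp/media/download/film.mkv
      stat -c %i /tmp/media/movie/film.mkv
      ```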
  18. 6.9-RC2: if I try to add a 2nd (or more) unRAID share/mount tag to a VM, I can not save/update the VM; clicking the Update VM button will not go any further. The VM is Ubuntu Server LTS 20.04.
  19. It looks like when you are not using a VPN and you want to set an incoming port / DHT port, setting it via the web UI won't take effect or be saved. You need to edit \appdata\binhex-rtorrentvpn\rtorrent\config\rtorrent.rc directly while the docker is not running; that gets saved and works.
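      For reference, the relevant rtorrent.rc lines look roughly like this; the port numbers are examples, and the exact option names can vary between rTorrent versions, so check against what is already in the file:

      ```
      # Fixed incoming port instead of a random one (example value)
      network.port_range.set = 58946-58946
      network.port_random.set = no
      # DHT port (example value)
      dht.port.set = 6881
      ```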
  20. I tried out linuxserver's deluge image because it uses the release version 2.0.3, not the dev version that comes with binhex's delugevpn. Some private trackers only allow up to 2.0.3 because that's the last release version. Also, the plugins DefaultTrackers-0.2-py3.6.egg and YaRSS2-2.1.4-py3.6.egg can not be installed. I thought it was because binhex's delugevpn has Python 3.8, but I tried linuxserver's, which has Python 3.6.9, and they still can not be installed. Might be a docker thing, not a Python version problem.
  21. Tried to install DefaultTrackers-0.2-py3.6.egg and YaRSS2-2.1.4-py3.6.egg, but they can not be installed. Python inside is 3.6.9, which should be compatible. Tried both Chrome and Firefox; both have the problem. Is this a known bug? Is there a workaround to install those two?
  22. I just opened the console for this docker and checked inside; /downloads/complete is there. I think the thing is: with linuxserver's deluge docker image, the default settings when installing will ask you to map your real download path on the host to the container's /downloads, so I map my actual download path to /downloads, and the docker then has /downloads/complete and /downloads/incomplete inside. But binhex's delugevpn docker image by default wants /data/complete and /data/incomplete inside the docker. And binhex's radarr/sonarr prefer, or are hardcoded, to look for downloading/downloaded files in /data/complete, yet they report that /downloads/complete can not be found, which is a misleading report. I guess they just report "can't find DEFINED_DOWNLOAD_PATH" or something like that, so it wrongly came out as "/downloads/complete can not be found". Here is a trick: with linuxserver's deluge docker image, just map your real download path to /data instead. This way binhex's radarr/sonarr will find the downloaded files and won't complain. A real fix could be for binhex's radarr/sonarr to not use the hardcoded download path /data/complete to look for downloaded files, but to look in the DEFINED_DOWNLOAD_PATH instead.
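      The trick above, written out as a volume-mapping sketch (the host path is an example; on unRAID you would normally change the path mapping in the docker template rather than run this by hand):

      ```shell
      # Map the real download path to /data so binhex's radarr/sonarr
      # find completed files where they expect them (/data/complete)
      docker run -d --name=deluge \
        -v /mnt/user/media/download:/data \
        lscr.io/linuxserver/deluge:latest
      ```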
  23. I have it disabled. When VPN/WireGuard is enabled, I think the network traffic is managed by the VPN application, so it makes sense that deluge uses random ports going through the VPN rather than port 58946 on the docker host machine. But when you are not using a VPN, and the port mapping from the host to the container is 58946 to 58946, why don't we put 58946 in the deluge network incoming/outgoing settings instead of leaving it on a random port? Will it be mapped again by docker, say deluge's random port to the container's 58946 to the host's 58946?