stealth82

Members • 172 posts
Everything posted by stealth82

  1. I had noticed that, but thank you for pointing it out. It happened on a new disk I installed just the other day to replace one that started counting pending sectors last week. In conjunction with that I shrank the array by reassigning the pulled drive as a pool drive. So there has been a lot of moving/copying while parity was also rebuilding. I think I should have gone for a less reckless approach and done things in sequence rather than in parallel… I have started the scrub process now. Hope everything will turn out well…
  2. I noticed that if I use the pcie_acs_override parameter in the syslinux.cfg on the boot key (I need to set pcie_acs_override=id:1022:43c6 to split a particular IOMMU group) and then go to Settings > VM Manager, even without touching anything, I am immediately greeted by the warning banner that says a reboot is required. If I change any of the settings on this page, the pcie_acs_override parameter in syslinux.cfg gets wiped out and I need to add it back before rebooting, otherwise I lose it. Could this particular case be handled more gracefully?
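     For reference, the line I keep having to restore looks roughly like this (a sketch of the boot entry; the label name and any other append values vary per install):

        label Unraid OS
          menu default
          kernel /bzimage
          append pcie_acs_override=id:1022:43c6 initrd=/bzroot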
  3. OK, I solved the problem. Thank you @Squid 😉, that was the answer!
  4. OK, I believe it's the same problem. It's writing to disk1 because it can't write to any of the pools. That's why the cache-only type of share can't be created whereas the prefer type can: it manages to create it on disk1 of the array while failing to do so on the pool disk, but at least it can "deal with that". At the same time shfs - that's the process that presents all the disk folders as one under /mnt/user, right? - isn't able to read data from any of the pool disk directories. And yet the webGUI is able to show it (see attached screenshots).
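     To illustrate what I mean (hypothetical listing, using the disk and pool names from my setup):

        # /mnt/user is the shfs (FUSE) view merging the array disks and the pools
        ls /mnt
        cache  disk1  lone  user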
  5. 😞 I'll keep on trying. Regarding SSH, yes, it's hardened and password login for root is not enabled from outside.
  6. Yeah, that's a separate problem that just popped up now as I went through a stop/start cycle of the array, and there it was, a nice surprise... I recovered the data, though I'm not sure whether something got corrupted..., and reinitialized the drive; I will have to copy the data back to it. I tried to change the share from cache-only to cache-prefer and I was finally able to create the shares. Only... it's now writing exclusively to disk1 of the array and never to the pool. Moreover I have data on the pool disk that should fall under the user share, and the share doesn't show any of it!! What's going on??? tower-diagnostics-20210515-1853.zip
  7. I'm using unRAID 6.9. I have 2 pools, cache and lone, each one with only 1 disk. I tried for the first time to create a share (see screenshot) using the pool lone, but I have been unable to. This is the error I see in the syslog:

        May 15 14:16:11 Tower emhttpd: shcmd (1409): mkdir '/mnt/user/public'
        May 15 14:16:11 Tower root: mkdir: cannot create directory '/mnt/user/public': No medium found
        May 15 14:16:11 Tower emhttpd: shcmd (1409): exit status: 1
        May 15 14:16:11 Tower emhttpd: shcmd (1410): rm '/boot/config/shares/public.cfg'

     If I create a share using the array disks it just works. I have already tried several things - stopping/starting the array, restarting the server, creating the share using the other pool named cache - but nothing. I also searched the forum for solutions and found similar threads, but all of them without a useful answer. Why can't I create a pool-only share? tower-diagnostics-20210515-1422.zip
  8. I should have read that post before... Anyway, I did the adoption step and all is good now. Thanks for your repeated help.
  9. I let gfjardim's container update to the PRO version. Before turning the container off I checked it and saw that it had been upgraded (blue color and "Pro" written somewhere). After that I turned the container off and followed the readme in the first post of this thread. I changed the host but the problem remains. I guess I didn't read carefully, since I now notice you were supporting the transition from your own CrashPlan Home container...
  10. gfjardim's. <authority address="central.crashplan.com:443" hideAddress="false"/>
  11. I used the webGUI and I have the same problem. I even dumbed the password down, but it keeps on saying the info is incorrect. My account credentials work just fine with the official CrashPlan PRO dashboard. What could the problem be?
  12. Same here. I had 75GB to back up. I started yesterday and it ran for 5 hours. It backed up a few GB. This morning I resumed it: it's been 9 hours and I still have 45GB to back up. Unusable!
  13. Would it be possible to provide a field in the network settings to define the hostname for eth1, in case it's present and in use?
  14. It worked! Thanks!!! But does that mean I will have to do that again after a future update of the docker, or when installing it from scratch?
  15. My transmission config has:

        "script-torrent-done-enabled": true,
        "script-torrent-done-filename": "/config/push.sh"

     The push script makes use of curl, but for some time now the binary is nowhere to be found in the docker... it used to be there before... If I type

        docker exec -ti transmission /config/push.sh

     it returns:

        /config/push.sh: line 55: curl: not found

     Was curl recently removed? Or has the docker changed with more recent releases of unRAID?
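     In the meantime a fallback along these lines inside push.sh should cover it (just a sketch; PUSH_URL is a hypothetical placeholder for the actual notification endpoint, and it assumes the image still ships BusyBox wget):

        # use curl when available, otherwise fall back to BusyBox wget
        # PUSH_URL is a hypothetical placeholder, not part of the original script;
        # TR_TORRENT_NAME is set by transmission for script-torrent-done scripts
        if command -v curl >/dev/null 2>&1; then
            curl -s -d "name=${TR_TORRENT_NAME}" "$PUSH_URL"
        else
            wget -q -O - --post-data="name=${TR_TORRENT_NAME}" "$PUSH_URL"
        fi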
  16. I have a question, but it's not strictly related to the docker itself. Hope it's not too off-topic... When the Plex docker network is configured as Bridge, Plex takes a docker network IP in a range that is obviously different from the hosting unRAID one - e.g. the home network is 192.168.1.* and Plex is under 172.17.0.*. Although mapping the Plex port exposes the server to the home network, Plex Server keeps the internal IP as a reference. Now everything seems to work OK until you want to verify how close your Plex server is: if I'm on the home network I would expect the server to be considered nearby - same local network (that's info you can see in the iOS app when you check which server you're connected to). Unfortunately the server is labeled as remote, not nearby. This screws up the way content is served - direct play, profiles, etc... Any suggestion on this? Has this problem been considered before?
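     To make the two setups concrete (a sketch using the stock Plex image as a placeholder; the actual container/template may differ):

        # bridge mode as described: Plex lives at 172.17.0.x and the port is mapped
        docker run -d --name plex -p 32400:32400 plexinc/pms-docker
        # host mode: Plex binds directly to the server's 192.168.1.x address,
        # so LAN clients should be able to see it as "nearby"
        docker run -d --name plex --net=host plexinc/pms-docker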
  17. I compiled it as a Slackware package; it's not a docker. It sits under /boot/extra, the folder that unRAID looks in to install additional packages at startup. Then I made some modifications so that the configuration is read from /boot/config/nginx and changes don't need to be copied over to /etc/nginx, although I do copy nginx.conf at startup.
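     The startup part boils down to something like this in the go script (a rough sketch; unRAID installs the /boot/extra package by itself at boot, and the binary path depends on how the package was built):

        # /boot/config/go - copy the live config from the flash drive, then start nginx
        cp /boot/config/nginx/nginx.conf /etc/nginx/nginx.conf
        /usr/sbin/nginx -c /etc/nginx/nginx.conf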
  18. Hello, I'm not sure where to begin, or whether this is the right subforum, but I'll try to be as clear as possible. I have compiled an nginx package for Slackware since I wanted a reverse proxy that works even when the array is stopped. Build, installation and configuration went fine. nginx is working 100%. There's only one problem... the GUI doesn't ask for a password!! It just goes straight in. This is how my site is configured:

        server {
            listen 8000;
            listen 4430 ssl;
            server_name fqdn.mydomain.whatever tower.mylocaldomain.whatever;

            ssl_certificate     /boot/config/nginx/certs/my.pem;
            ssl_certificate_key /boot/config/nginx/certs/my.key;

            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            location / {
                proxy_pass http://localhost;
            }
        }

     I can change the proxy_pass directive to this:

        location / {
            proxy_pass http://tower;
        }

     Still no password asked. But if I change the proxy_pass directive to this:

        location / {
            proxy_pass http://tower.mylocaldomain.whatever;
        }

     then it asks for a password. What am I missing? Is it a security bug on unRAID's side?
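     One detail I noticed while digging: by default nginx sends the upstream a Host header equal to whatever is written in proxy_pass ($proxy_host), so the three variants above present different Host values to the webGUI. Forwarding the client's original header would look like this (just a guess at what unRAID keys on, not a confirmed fix):

        location / {
            proxy_set_header Host $host;   # forward the original Host header upstream
            proxy_pass http://localhost;
        }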
  19. Instead of going the XML modification route, couldn't the vfio append string also be used - as it currently is for PCI pass-through devices - to automatically bind the identified USB controller and show it through the GUI?
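     That is, the same mechanism as a boot line of this kind (sketch; the device ID is just a placeholder reused from my earlier ACS override post):

        append vfio-pci.ids=1022:43c6 initrd=/bzroot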
  20. This made me think that it would be nicer to have a turbo write option per share... Let's say I would activate it when copying movies and TV shows, but not in other cases where the typical file size is relatively small. Maybe it doesn't make sense. Maybe it would make more sense if unRAID could determine the size of the file being copied beforehand and decide, above a certain size limit, to trigger turbo write mode.
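     Conceptually something like this (a sketch only, not an existing unRAID feature; it uses the real md_write_method tunable, while the 1 GiB threshold and $FILE are made up for illustration):

        # switch to reconstruct ("turbo") write for big files, otherwise stay on read/modify/write
        SIZE=$(stat -c %s "$FILE")   # $FILE: hypothetical path of the incoming file
        if [ "$SIZE" -gt $((1024*1024*1024)) ]; then
            /usr/local/sbin/mdcmd set md_write_method 1   # reconstruct write
        else
            /usr/local/sbin/mdcmd set md_write_method 0   # read/modify/write
        fi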
  21. Has anyone tested whether the problem is still there with the new beta releases?