stealth82

Members

  • Posts: 172
  • Joined
  • Last visited

Converted

  • Gender: Male

stealth82's Achievements

  • Apprentice (3/14)
  • Reputation: 0

  1. I had noticed that, but thank you for pointing it out. It happened on a new disk I installed just the other day to replace one that started counting pending sectors last week. In conjunction with that, I shrank the array by reassigning the pulled drive as a pool drive. So there has been a lot of moving/copying while parity was also rebuilding. I think I should have gone for a less reckless approach and done things in sequence rather than in parallel… I've started the scrub process now. I hope everything turns out well…
  2. I noticed that if I make use of the pcie_acs_override parameter in the syslinux.cfg on the boot key (I need to define pcie_acs_override=id:1022:43c6 to split a particular IOMMU group) and then go to Settings > VM Manager, even without touching anything, I am immediately greeted by the warning banner saying a reboot is required. If I change any setting on that page, the pcie_acs_override parameter gets wiped from syslinux.cfg and I need to add it again before rebooting, otherwise I lose it. Can this particular case be handled more gracefully? (See the syslinux.cfg sketch after this list.)
  3. OK, I solved the problem. Thank you @Squid 😉, that was the answer!
  4. OK, I believe it's the same problem. It's writing to disk1 because it can't write to any of the pools. That's why the cache-only type of share can't be created whereas the prefer one can: it manages to create it on disk1 of the array while failing on the pool disk, but at least it can "deal with that". At the same time shfs (that's the process that presents all the disk folders as one under /mnt/user, right?) isn't able to read data from any of the pool disk directories. And yet the webGUI is able to show it (see the attached screenshots, and the shell illustration after this list).
  5. 😞 I'll keep on trying. Regarding SSH: yes, it's hardened and root password login is not enabled from outside.
  6. Yeah, that's a separate problem that just popped up as I went through a stop/start cycle of the array, and there it was, a nice surprise... I recovered the data, though I'm not sure whether something got corrupted, and reinitialized the drive; I will have to copy the data back to it. I tried changing from cache-only to cache-prefer and was finally able to create the shares. Only... it's now writing exclusively to disk1 of the array and never to the pool. Moreover, I have data on the pool disk that should fall under the user share, and the share doesn't show any of it!! What's going on??? tower-diagnostics-20210515-1853.zip
  7. I'm using unRAID 6.9. I have two pools, cache and lone, each with only one disk. I tried for the first time to create a share (see screenshot) using the pool named lone, but I have been unable to. This is the error I see in the syslog:
     May 15 14:16:11 Tower emhttpd: shcmd (1409): mkdir '/mnt/user/public'
     May 15 14:16:11 Tower root: mkdir: cannot create directory '/mnt/user/public': No medium found
     May 15 14:16:11 Tower emhttpd: shcmd (1409): exit status: 1
     May 15 14:16:11 Tower emhttpd: shcmd (1410): rm '/boot/config/shares/public.cfg'
     If I create a share using the array disks, it just works. I have already tried several things: stopping and starting the array, restarting the server, creating the share using the other pool named cache, but nothing worked. I also searched the forum for solutions and found similar threads, but none of them had a useful answer. Why can't I create a pool-only share? (A few diagnostic checks are sketched after this list.) tower-diagnostics-20210515-1422.zip
  8. I should have read that post earlier... Anyway, I did the adoption step and all is good now. Thanks for your repeated help!
  9. I did let gfjardim's container update to PRO. Before turning the container off I checked it and saw that it had been upgraded (blue theme, with "Pro" written somewhere). After that I turned the container off and followed the readme in the first post of this thread. I changed the host, but the problem remains. I guess I didn't read carefully, since I now notice you were supporting the transition from your own CrashPlan Home container...
  10. gfjardim's. <authority address="central.crashplan.com:443" hideAddress="false"/>
  11. I used the webGUI and I have the same problem. I even simplified the password, but it keeps saying the info is incorrect. My account credentials work just fine with the official CrashPlan PRO dashboard. What could the problem be?
  12. Same here. I had 75 GB to back up. I started yesterday and it ran for 5 hours, backing up only a few GB. This morning I resumed it: it's been 9 hours and I still have 45 GB to back up. Unusable!
  13. Would it be possible to provide a field in the network settings to define the hostname for eth1, in case it's present and in use?
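
For item 2 above, a minimal sketch of what the boot entry in syslinux.cfg could look like with the ACS override added. It assumes the stock "Unraid OS" label and the default bzimage/bzroot entries, which may differ on a customized flash drive:

    label Unraid OS
      menu default
      kernel /bzimage
      append pcie_acs_override=id:1022:43c6 initrd=/bzroot

The id:1022:43c6 value is the vendor:device ID quoted in the post; per the post, changing anything under Settings > VM Manager rewrites this append line, so the parameter has to be re-added before rebooting.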
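For item 4, a rough shell illustration of how the shfs view under /mnt/user relates to the per-disk and per-pool paths. The share name public and the pool name lone are borrowed from the later posts, and the exact paths are assumptions based on the usual Unraid layout:

    # /mnt/user is the shfs (FUSE) view that merges the array disks and the pools.
    mount | grep /mnt/user          # should list shfs as the filesystem for /mnt/user
    ls /mnt/disk1/public            # the share's folder on array disk1
    ls /mnt/lone/public             # the share's folder on the 'lone' pool
    ls /mnt/user/public             # shfs should present the union of the two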
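For item 7, a few hedged checks that could help narrow down the "No medium found" error. The pool name lone, its mount point /mnt/lone, and the share name public are taken from the post; the interpretation of the error is an assumption, not a confirmed diagnosis:

    # 'No medium found' (ENOMEDIUM) from the mkdir suggests shfs could not find an
    # available device to create the share on, so check whether the pool is mounted.
    df -h /mnt/lone                       # the pool filesystem should appear here
    ls -ld /mnt/lone                      # the mount point should exist and be browsable
    cat /boot/config/shares/public.cfg    # share config written by the webGUI (if it survived)
    grep -i 'lone\|no medium' /var/log/syslog   # look for pool mount errors around share creation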