
apandey

Members
  • Posts: 461
  • Joined
  • Last visited

Everything posted by apandey

  1. I can confirm that zfs_get_pool_data.lua is what spins up the pool for me. I'll probably have to spend a bit more time deconstructing it to find the exact step that is the culprit, and to see if any caching configuration can help. Listing datasets and snapshots with the zfs command does not spin up the pool for me.
  2. Generally, LSI cards are well supported. There is a list and some recommendations here: https://wiki.unraid.net/Hardware_Compatibility#PCI_SATA_Controllers
     There is also a thread here discussing more recent updates to the list.
  3. Please post diagnostics. It would also help if you could point out one share which is having this issue.
  4. I use the following in my rsync user scripts on unraid to handle UTF-8 filenames:

     # ensure unicode filenames are supported
     export LANG="en_US.UTF-8"
     export LC_ALL="en_US.UTF-8"
     export G_FILENAME_ENCODING="@locale"
     export G_BROKEN_FILENAMES="1"

     You can probably tweak this to your use case. Note the glib variables; it may not just be locale. I can get umlauts to work with the above setup.
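     As an illustration, a minimal sketch of how those exports sit in a user script (the source and destination paths here are made up; adjust them to your shares):

     #!/bin/bash
     # ensure unicode filenames are supported
     export LANG="en_US.UTF-8"
     export LC_ALL="en_US.UTF-8"
     export G_FILENAME_ENCODING="@locale"
     export G_BROKEN_FILENAMES="1"

     # mirror a share, preserving attributes and removing files deleted at the source
     rsync -avh --delete /mnt/user/photos/ /mnt/disks/backup/photos/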
  5. You can always reset networking by deleting the network and routes config files from the unraid USB (or back those up before major changes).
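     For example, a hedged sketch of backing them up before a change (assuming the usual /boot/config location on the flash drive; the exact filenames can vary between unraid versions):

     # copy the network-related config files off the flash drive before changing anything
     mkdir -p /boot/config/backup
     cp /boot/config/network*.cfg /boot/config/backup/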
  6. What are your share permissions? Which user are you using to connect?
  7. That suggests your routing hardware is not up to the task. In any case, if they don't need to be on separate networks, keeping them on one network is simpler.
  8. Start by checking container sizes with the built-in functionality on the Docker tab. If that's not sufficient, look at these scripts: https://github.com/SpaceinvaderOne/Unraid_check_docker_script
     In the end, you need to find the container that is writing data without a volume mount and hence growing in size.
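     A quick command-line check is also possible (just a sketch; with the --size flag the size column shows each container's writable layer, which is what grows when data is written outside a volume mount):

     # list containers with the size of their writable layer
     docker ps --size --format "table {{.Names}}\t{{.Size}}"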
  9. Yes, and I think @jakeisrollin did realise that midway through his shrink attempt, hence the title of this thread wanting to undo the new config. He did mention he wants to save data if possible, which is why I asked if he can rebuild. All but one of the drives are available with correct data; the only issue is the broken config.
  10. Isn't it that parity should be valid as of when the data disk broke, and since then the array has not been started, so parity should be untouched? The only remaining question is whether the missing disk can be added to the array in the new config. I think that's the only issue, because if that could be done, this is just a case of a drive dying and us using parity to emulate it. Your response indicates there is no way to add the missing disk back; in that case, I would assume unraid will lose the data from the broken disk. However, looking at the damage, it's possible you can sleeve in the plastic part and bring the disk back up to try and copy over the data. The connection may be flaky, but if there is no other option, I would give it a try.
  11. I managed to do a quick test. Running the following did not spin up the pool:

     zpool list
     zfs list
     zfs list <dataset>
     zfs list -r <dataset>
     zfs list -t snapshot

     I am not sure I am being exhaustive enough though. Is there a list of commands, or some log where I can observe what the plugin is doing? Or maybe a relevant code snippet I can refer to?
  12. OK, thanks. Next time, when the pool is spun down, I will try running these one by one over SSH and see if I can pinpoint which one of them triggers a spin up (and subsequently what can be done to avoid that). I will report back with what I discover.
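     One rough way to pinpoint it is to check the drive power state between commands. This is only a sketch and assumes SATA pool members; /dev/sdX is a placeholder for an actual pool device:

     # report whether the drive is in standby or active/idle
     hdparm -C /dev/sdX
     # run one candidate command, e.g.
     zfs list
     # then check the power state again
     hdparm -C /dev/sdX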
  13. I am seeing the same spin up behaviour. More specifically, the spin up only happens if I visit the Main tab. If I instead go to the Dashboard while my pool drives are spun down, I can see them listed as spun down there. It's only when I go to Main that the pool spins up. I have also asked in the Unassigned Devices plugin support thread, but it seems that ZFS Master is the cause. Does this plugin run any zfs commands when the Main tab is loaded? Would those commands need to read data off the disks? Because if they do, it would spin them up. Can anyone test whether a zfs pool verified as spun down on the Dashboard still stays spun down when accessing the Main tab? I am on unraid 6.11.5.
  14. Seems like that is the nextcloud default; it logs under the data directory. Odd choice. If you want, you can change the log location in config.php, which I believe is under appconfig/www/nextcloud/config.
  15. It can be some open files - the Open Files plugin can help to identify them
     It can be some file access - the File Activity plugin can help to identify it
     It can be something trying to list directories - this is a harder one; you usually have to stop all docker / VM / share clients etc. and start them one by one to identify it. If this is the cause, the Folder Caching plugin can help to solve it
  16. You should probably understand how docker works. All the others you see in appdata are not installed there; rather, those directories are mounted into the respective containers at runtime. Containers themselves are stored inside your docker image file. adminer probably doesn't have any state that needs to be saved for the user, so nothing is volume mounted. If you want something to be stored outside the container, you will have to find the corresponding container path and mount a host path onto it.
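     As an illustration (just a sketch; the container name, image, and paths are hypothetical), mapping a host directory onto a container path looks like:

     # persist the container's /config directory under appdata on the host
     docker run -d --name someapp \
       -v /mnt/user/appdata/someapp:/config \
       someimage:latest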
  17. You should understand that the unraid array is not RAID, but rather a JBOD optionally protected by parity drives. As for adding SATA ports, the recommended solution would be to add a PCIe HBA (I myself have 16 drives running off an LSI HBA and the remaining 8 running off motherboard SATA ports). Beware of simple SATA port multipliers, as they kill bandwidth and may not be stable enough.
  18. The great thing is that unraid boots from a USB drive and then runs fully from RAM, so it is fairly trivial to create an unraid USB and boot it on your hardware. You can then see what the system detects. Unless you explicitly format drives, simply booting will not touch any existing data on the drives in your system, so it's a fairly safe way to confirm hardware compatibility. Based on the spec you posted, I think it should run fine.
  19. I don't believe that should be needed. I have a corsair psu with corsair link connected to a usb header, and it is detected fine without anything special
  20. No, it's a purely disconnected data pool with no unraid functionality directly running off it. I simply keep a backup of my important data there via an rsync user script that runs once a night to update the data and take a snapshot. The rest of the time nothing accesses the ZFS pool.
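     A rough sketch of what such a script can look like (the pool, dataset and share names here are hypothetical; my actual script differs in the details):

     #!/bin/bash
     # refresh the backup copy on the ZFS pool
     rsync -avh --delete /mnt/user/important/ /mnt/tank/backup/important/
     # take a dated snapshot of the backup dataset
     zfs snapshot tank/backup@$(date +%Y-%m-%d)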
  21. Indeed, if ports are being exposed using another network appliance, it's not an issue. In fact, it is preferable to port map at the firewall in that case. I myself don't expose anything externally and use the reverse proxy on the LAN, so I prefer not to have one more hop just for port translation.
  22. Docker can be given a device with the --device flag, something like --device /dev/ttyUSB0.
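     In a full docker run command that might look like this (the container and image names are just placeholders):

     # pass the USB serial device through to the container
     docker run -d --name someapp --device /dev/ttyUSB0 someimage:latest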
  23. I use rclone sync to keep a backup of my Google drive
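     For reference, a minimal form of that (the remote name gdrive and the destination path are assumptions; the remote is set up beforehand with rclone config):

     # mirror the Google Drive remote to a local share
     rclone sync gdrive: /mnt/user/backups/gdrive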
  24. Is your other drive physically damaged, or is it just a cable problem? I am not sure why you are removing that drive rather than connecting it back. If you shrink the array and the removed drive is not readable, you will have no way to recover the data from the removed drive. If you want to recover data, keep the configuration and parity intact, start the array, see if the broken drive is emulated, and then copy over the data to the remaining drive. Or rebuild the broken data drive by replacing it. Once parity is overwritten, you won't have any of these recovery options.