peteb83

Everything posted by peteb83

  1. Two years down the line and I find myself back looking at a thread I found then! Thought I would jump in again.

     Firstly, if you have found yourself here because you are running out of storage space, my suggestion is to look at a disk shelf with an IT-mode HBA. This sounds complex if you haven't done it, but it's actually pretty simple: you add a PCIe card, get the right cable, and plug it together. As far as the GUI is concerned, any drive you add to the disk shelf (I have a NetApp DS4243) looks like you added it to the original PC. I would advise keeping your cache disks and maybe your parity in the machine and putting storage on the disk shelf, which minimises traffic through the cable bottleneck.

     Re the clustering, my thoughts have changed a little. Storage is too complex; I think we should effectively be looking at slave CPUs. So: a PC with no storage in use (maybe a cache for its own needs) running a slave Unraid version that the main Unraid can farm apps/VMs out to. All the storage management stays on the main machine, but when the others are powered on they can take some or all of the load off. E.g. a download PC for pulling down and processing NZBs, a media box that takes over Plex/Emby, or just a straightforward secondary machine. Any app moved to the slave machine could be proxied through the main Unraid IP and port (see the proxy sketch after this list). Yeah, it's not clustering, but it is to clustering what Unraid is to RAID.

     The only exception to the storage rule is that I might suggest a backup slave that would spin up to take off-site/second-location backups, as we know redundancy is not backup!
  2. I have a Gen8 DL380p and managed to flash the RAID controller to HBA mode with the Unraid plugin, and have found that after setting my drives to SAT I start getting all the temps etc. showing up. Not sure it would help with the drive issue, but it might be useful otherwise.
  3. So I have managed to compile the kernel with the hpilo module in, using the Unraid kernel helper and changing the script as described in this post. So far I have managed to get a Docker container with access to /dev to run the Ubuntu version of AMS (see the container sketch after this list), but I don't think it had access to the drives, as it didn't seem to transfer all the data.
  4. I would love this as an option... from what I understand (which in this, as in all things, is very little), if you could identify files that are not where they should be according to the include/exclude settings, you could move them to the cache and let the mover sort it out. So it would just be a case of scanning, then initiating the copy, and maybe monitoring cache usage to pause the move at, say, 90% to let the mover catch up and restart at, say, 60% (see the copy-throttle sketch after this list). It doesn't feel that complex, but I'm probably wrong.
  5. I could totally see use cases for this... a slave Unraid with an SSD on, say, a Pi that could be used to access running VMs... or a resource-rich, power-hungry monster that a nice, quiet Unraid master could WOL if it needed the CPU/GPU (see the wake-on-LAN sketch at the end)... even a deep-storage system, where old shares get moved to an archive Unraid machine that the master can wake to access the storage pool, so it can spin up an entire server 😂
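
A rough sketch of the proxy idea from post 1: expose an app running on the slave machine through a port on the main server. This is just a generic TCP forwarder, not anything Unraid actually provides; SLAVE_HOST, SLAVE_PORT, and LISTEN_PORT are placeholder values for an app (e.g. Plex) on the secondary box:

    # Generic TCP forwarder; addresses and ports below are assumptions.
    import asyncio

    SLAVE_HOST = "192.168.1.50"   # assumed address of the slave machine
    SLAVE_PORT = 32400            # assumed app port on the slave (e.g. Plex)
    LISTEN_PORT = 32400           # port clients connect to on the main server

    async def pipe(reader, writer):
        # Copy bytes in one direction until the peer closes.
        try:
            while data := await reader.read(65536):
                writer.write(data)
                await writer.drain()
        finally:
            writer.close()

    async def handle(client_reader, client_writer):
        # For each incoming client, connect to the slave and shuttle
        # traffic both ways concurrently.
        upstream_reader, upstream_writer = await asyncio.open_connection(
            SLAVE_HOST, SLAVE_PORT)
        await asyncio.gather(
            pipe(client_reader, upstream_writer),
            pipe(upstream_reader, client_writer))

    async def main():
        server = await asyncio.start_server(handle, "0.0.0.0", LISTEN_PORT)
        async with server:
            await server.serve_forever()

    asyncio.run(main())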
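The container sketch for post 3, using the Docker Python SDK (pip install docker). The image name "ubuntu-ams" is a placeholder, and whether AMS needs all of /dev or just the hpilo nodes depends on your HP setup; this only mirrors the "docker with access to /dev" step described in the post:

    # Hedged sketch: run a container with the host's /dev exposed.
    import docker

    client = docker.from_env()
    container = client.containers.run(
        "ubuntu-ams",        # placeholder image with HP AMS installed
        detach=True,
        privileged=True,     # AMS talks to iLO via /dev/hpilo/* nodes
        volumes={"/dev": {"bind": "/dev", "mode": "rw"}},  # expose host /dev
    )
    print(container.id)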
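The copy-throttle sketch for post 4: copy misplaced files to the cache, but pause at a high-water mark and resume only once the mover has drained it below a low-water mark. The paths, thresholds, and the idea that something else has already built the list of misplaced files are all assumptions; this is not Unraid's real mover interface:

    # Hysteresis-throttled copy; CACHE path and thresholds are assumptions.
    import shutil
    import time

    CACHE = "/mnt/cache"
    HIGH_WATER = 0.90   # pause copying at 90% cache usage
    LOW_WATER = 0.60    # resume once the mover drains it to 60%

    def cache_usage() -> float:
        usage = shutil.disk_usage(CACHE)
        return usage.used / usage.total

    def throttled_copy(misplaced: list[tuple[str, str]]) -> None:
        # misplaced: (source path on the wrong disk, destination under /mnt/cache)
        for src, dst in misplaced:
            if cache_usage() >= HIGH_WATER:
                # Let the mover catch up before queueing more work.
                while cache_usage() > LOW_WATER:
                    time.sleep(60)
            shutil.copy2(src, dst)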
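And the wake-on-LAN sketch for post 5: the master sends a standard magic packet to wake the power-hungry slave (or the archive box) before handing work to it. The MAC address is a placeholder for the slave's NIC:

    # Standard WOL magic packet: 6 bytes of 0xFF, then the MAC repeated 16 times.
    import socket

    def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(packet, (broadcast, port))

    wake("aa:bb:cc:dd:ee:ff")  # placeholder MAC for the slave/archive machine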