tigga69

Members
  • Posts: 21
  • Joined
  • Last visited

tigga69's Achievements

Noob (1/14)

Reputation: 1

  1. At the moment I have two solutions, neither of which I'm happy with, but I wanted to post an update. 1. I have Xtras running on my PC - yeah, it defeats the purpose of running on a server. 2. I got a version working using docker compose, but it's a pain to keep updated! When I have spare time, I'll publish my working answer.
  2. Thought I would share the answer, as I needed to play around to get it working... If you are going to use NVIDIA, don't forget the runtime=nvidia setting in the extra parameters. The network can be the bridge setting; I chose to use br0. The important thing is to get at the files, which are in config and user. Because I used SillyTavern on the desktop initially, I knew which files to edit and where to add various bits. (I would recommend doing this to get familiar with the SillyTavern files and setup.) Hope it helps someone, because it took me a few hours to work it all out reading the docker files! A rough sketch of the run command is below.
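    A minimal sketch of that kind of setup as a docker run command (the image name, internal container paths and appdata locations here are placeholders, not my exact template - check the docs of whichever SillyTavern image you use):

    # NVIDIA runtime plus the br0 bridge, with the config and user folders kept on the array
    docker run -d --name sillytavern \
      --runtime=nvidia \
      --network br0 \
      -v /mnt/user/appdata/sillytavern/config:/app/config \
      -v /mnt/user/appdata/sillytavern/user:/app/user \
      sillytavern/sillytavern:latest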
  3. tigga69

    ZFS simplified

    OK, thanks, that's clearer. So I'll build a single-drive array, a cache, and a ZFS pool (which has most of my storage capacity). I've got the Plus licenses so I'm OK with the drive numbers, and oddly I have a spare small SSD lying around, so that's OK too. I'll be building the system this weekend (I think) so I can see what it all looks like. Again, thanks for the clarity.
  4. tigga69

    ZFS simplified

    Hi Foo_fighter... not sure I understand your response. Why? What size? How should this be configured? Is this for Unraid as parity, or for the ZFS pool? If I use 6+2 for ZFS, then why would I need an extra drive? I have an M.2 for the cache. I also still have 2 spare SATA ports, so I can add more drives, but why would I? As I said, I'm after the highest resilience, so no, I don't want to lose the self-repair. The whole point is maximum resilience, which is erring towards ZFS as RAIDZ2, i.e. 6+2 in the pool.
    Option 1: ZFS with RAIDZ2 - 6+2 drives. Pros: self repair, performance. Cons: high power usage, high heat from the drives, high wear as all drives spin.
    Option 2: ZFS like normal Unraid - 6 single-drive ZFS pools + 2 parity. Pros: low power, heat, and wear. Cons: no self repair, slow performance.
    Option 3: ZFS with RAID0 - 6 drives striped in a ZFS pool without fault tolerance but with 2x Unraid parity drives. I suggested this but it wouldn't work, as the ZFS pool is larger than the parity drives; it can only be done as option 2 unless larger parity drives are purchased.
    Option 4: 2x RAIDZ1 pools - two 4-drive pools, joined in Unraid without parity, i.e. a 3+1 ZFS pool & a 3+1 ZFS pool. Pros: striping across 4 drives improves performance over option 2, maintains self repair, lower drive wear, lower heat, lower power (fewer drives spinning). Cons: lower performance than option 1, lower resilience (1 drive failure per pool).
    As I write it out this way, I am still convinced it is either option 1 or option 4 (rough zpool layouts for both are sketched below). Option 2 doesn't make sense for my use case (and option 3 doesn't work). You appear to be suggesting either a configuration I don't understand, or option 2, which doesn't meet my needs; that is just a variant on today's setup with XFS but using ZFS, with minimal benefit.
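    Purely to illustrate the two layouts (device names are placeholders, and on Unraid you would normally build these pools through the GUI rather than by hand):

    # Option 1: one pool with a single RAIDZ2 vdev across all eight drives (any two drives can fail)
    zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh

    # Option 4: two independent pools, each a 4-drive RAIDZ1 vdev (one drive failure tolerated per pool)
    zpool create pool1 raidz1 sda sdb sdc sdd
    zpool create pool2 raidz1 sde sdf sdg sdh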
  5. Ditto... I'm seeing the same thing.
  6. tigga69

    ZFS simplified

    I've been reading everything I can about ZFS and watching the videos from Spaceinvader One and others, but I'm a little confused about things and what to do! I'm building a new server to replace an old one I can't expand any more, so I was going to take advantage of ZFS. I will be using 8x 4TB drives, i.e. they are all the same capacity. In doing this, am I better off doing RAIDZ2, or XFS with 2 parity?
    ZFS gives me compression, which gives more capacity (quick example below); I should also get higher read/write speed (I don't have CPU issues as it is a new server dedicated to running apps in dockers); and I can do snapshots to a backup machine. I'm assuming that using RAIDZ means the parity is spread across the drives, so performance doesn't depend on the single parity drive used by Unraid; this will also mean less wear on a single parity drive due to fewer reads/writes on it. XFS gives me easy capacity increases (but I'm at max capacity in an N2 case with 8 drives) and less power, as I only spin up the drive I am using; also less heat, as I'm in a small case with limited cooling.
    What I am after is the highest resilience and good performance; I don't care about power consumption or processing impact. I naturally think I want to use RAIDZ2. But is this correct? Should I build the new box as pure ZFS with RAIDZ2, or as XFS with 2 parity, or some hybrid like ZFS striping with 2 parity, or RAIDZ1 and 1 parity? Another option would be 2 pools of 4-drive RAIDZ1, so only half the drives spin up at any time when using them; this then gives both speed benefits and also power and heat benefits. I should also add I will have a 1TB M.2 for the cache.
    What is the reason not to use ZFS? And what config do people suggest, and why? (The why is important!) It's all new on Unraid, and parity vs ZFS makes the choices a little complex, and I can see loads of opportunities to get it right or make a pig's ear of the initial config! Thoughts please.
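    For reference, compression on ZFS is just a pool/dataset property (the pool name "tank" here is a placeholder):

    # Enable lz4 compression and check how much space it is saving
    zfs set compression=lz4 tank
    zfs get compression,compressratio tank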
  7. Hi, I've got 2 servers I'm rebuilding and I want to migrate to ZFS... but to do that without data loss, I'm going to need to use a method that requires RAIDZ expansion. Any idea (roughly which quarter) when we can expect to see this functionality in Unraid? Thanks.
  8. Thanks to Kixsume for the starting points. https://github.com/oobabooga/text-generation-webui/issues/4850
  9. Just an FYI... tensorrt_models.sh is out of date and doesn't work any more. The line that installs packages needs to be changed to:
    # Install packages
    pip install --upgrade pip && pip install onnx==1.15.0 protobuf==4.25.0
  10. Yes, I needed to change the M.2 socket to the primary one.
  11. So rather than a $40 Coral TPU, I should put a $1000 GPU into my Unraid server??? Hmmm, not sure I agree with that approach! Using desktops for TensorFlow is a different requirement to a server, and like the OP (and many others) I have a photo library dating back 25 years of family, friends, holidays etc., and would love to use AI to find old photos in a different way. I've just started using PhotoPrism because I can't find an alternative. When it works, it is amazing... find photos with person A and person B on a beach, and it gives you a selection of photos. The issue I have is that I have over 50K photos, but not all of them are getting faces detected and matched. However, I did read a note on one page today, and I take the section on the background worker to imply that you just have to wait a long time for it to process all the faces. The point the OP is really making is that TensorFlow on a CPU can process maybe 3 frames a second, whereas a Coral TPU would process 100. This means it could work through my library in hours rather than days, and I could regularly add to the people and objects I want to recognise. Without Coral, I could spend a year sorting the library to do facial recognition.
  12. @ich777 weird!! The important thing is that it works now, and it was really nothing more than a slot change from when I first got it. All the other stuff was already set correctly. It seems the slot does matter in my case, so it's useful to know for future reference. I have to admit it was a surprise, but it does sort of make sense that some slots may not behave correctly. I agree it is down to the manufacturer's implementation of the HW and BIOS, but I've not got the inclination to try to diagnose it further.
  13. @ich777 thanks for the feedback. I actually solved the problem before your last reply. It's a "doh" thing and pretty obvious when you sit and mull over the problem. Frigate has now been running fine on the Coral for several days. I ended up using the cheap adapter card and sending back the expensive one. https://www.amazon.co.uk/gp/product/B07TK9KMT5/ref=ppx_yo_dt_b_asin_title_o01_s00?ie=UTF8&psc=1
    So I can confirm both adapter cards work, including the StarTech.com PCI Express to Mini PCI Express Card Adapter. Important notes, as ich777 pointed out: you need to check you have Above 4G Decoding enabled in the BIOS and that Resizable BAR is enabled. The most important thing... even though it is a short, single-lane adapter card, put it in the full-size slot!! The short PCIe x1 slot, in my case, does not support the Resizable BAR functionality; only the two PCIe x16 slots do. Given this is an Unraid server, and whilst I have a cheap graphics card, it is used as a headless device (i.e. no monitor attached), it is just a matter of arranging the cards in the right order for my config. My SATA expansion card moved into the PCIe x1 slot, so it wasn't an issue for me.
    Thanks for all the pointers and thought, and to anyone else who can't get the USB or M.2 Coral cards working: the PCIe Mini card with a cheap adapter, put in the right slot, with the ich777 drivers, works fine (a quick way to confirm the card is detected is below). Perhaps if I had sat and thought about it properly from the start I would have had it working in minutes rather than days, so I hope this helps someone.
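    For anyone checking the same thing, a quick sanity check from the Unraid console (the exact device string can vary between revisions, so treat this as a rough sketch):

    # The Coral PCIe module normally shows up under "Global Unichip Corp."
    lspci -nn | grep -i -e coral -e "global unichip"

    # With the Coral driver plugin loaded, the apex device node should also exist
    ls /dev/apex_0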
  14. Right! I fixed it!!! I just tried a different slot... which was obvious, really. I was using the PCIe x1 slot, and then I moved it to the PCIe x16 slot. I wasn't using the PCIe x16 slot before because the HD controller board was in it! Big "doh!" from me. Shout out to Brian H., because it was his post that gave me the clue.