tigga69

Members
  • Posts: 21
Everything posted by tigga69

  1. At the moment I have two solutions, neither of which I'm happy with, but I want to post them. 1. I have xrtras running on my PC (yes, it defeats the purpose of running on a server). 2. I got a version working using docker compose, but it's a pain to keep updated! When I have spare time, I'll publish a proper write-up of my working answer.
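    For what it's worth, the "pain to keep updated" part of the compose route can be reduced to a small routine; a sketch only, where the appdata path is an assumption:

```shell
# Hypothetical update routine for a compose-managed stack
cd /mnt/user/appdata/myapp   # adjust to wherever your compose file lives
docker compose pull          # fetch any newer images
docker compose up -d         # recreate only the containers whose image changed
docker image prune -f        # drop the superseded image layers
```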
  2. Thought I would share the answer, as I needed to experiment to get it working... If you are going to use NVIDIA, don't forget the runtime=nvidia setting in the extra parameters. The network can be the bridge setting; I chose to use br0. The important thing is to get to the files, which are in config and user. Because I used SillyTavern on the desktop initially, I know which files to edit and where to add various bits. (I would recommend doing this to get familiar with the SillyTavern files and setup.) Hope it helps someone, because it took me a few hours to work it all out reading the docker files!
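    As a sketch of what that looks like as a raw docker command (the image tag and host/container paths are assumptions; on Unraid you would set the same things in the template):

```shell
# Sketch only: adjust the paths and tag to your own setup
docker run -d --name sillytavern \
  --runtime=nvidia \
  --network=br0 \
  -v /mnt/user/appdata/sillytavern/config:/home/node/app/config \
  -v /mnt/user/appdata/sillytavern/user:/home/node/app/data \
  ghcr.io/sillytavern/sillytavern:latest
```

    With br0 the container gets its own LAN IP, so no -p port mappings are needed.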
  3. tigga69

    ZFS simplified

    OK, thanks, that's clearer. So I build a single-drive array, a cache, and a ZFS pool (which has most of my storage capacity). I have the Plus licence, so I'm OK with the drive numbers, and oddly I have a spare small SSD lying around, so that's OK too. I'll be building the system this weekend (I think) so I can see what it all looks like. Again, thanks for the clarity.
  4. tigga69

    ZFS simplified

    Hi Foo_fighter... not sure I understand your response. Why? What size? How should this be configured? Is this for Unraid as parity, or a ZFS pool? If I use 6+2 for ZFS, then why would I need an extra drive? I have an M.2 for the cache. I also still have 2 spare SATA ports, so I can add more drives, but why would I? As I said, I'm after the highest resilience, so no, I don't want to lose the self-repair. The whole point is maximum resilience, which is erring towards ZFS as RAIDZ2, i.e. 6+2 in the pool.

    Option 1: ZFS with RAIDZ2, 6+2 drives. Pros: self-repair, performance. Cons: high power usage, high heat from the drives, high wear as all drives spin.

    Option 2: ZFS like normal Unraid, 6 single-drive ZFS pools + 2 parity. Pros: low power, heat, and wear. Cons: no self-repair, slow performance.

    Option 3: ZFS with RAID0, 6 drives in a striped ZFS pool without fault tolerance but with 2 Unraid parity drives. I suggested this, but it wouldn't work, as the ZFS pool is larger than the parity drives; it can only be done as option 2 unless larger parity drives are purchased.

    Option 4: 2x RAIDZ1 pools, i.e. a 3+1 ZFS pool and a 3+1 ZFS pool, joined in Unraid without parity. Pros: striping across 4 drives improves performance over option 2, maintains self-repair, lower drive wear, lower heat, lower power (fewer drives spinning). Cons: lower performance and lower resilience (1 drive failure per pool) than option 1.

    Writing it out this way, I am still convinced it is either option 1 or option 4. Option 2 doesn't make sense for my use case (and option 3 doesn't work). You appear to be suggesting either a configuration I don't understand, or option 2, which doesn't meet my needs; that is just a variant of today's XFS setup but using ZFS, with minimal benefit.
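    For anyone reading along, the raw geometry behind options 1 and 4 can be sketched at the zpool level (on Unraid you would build these through the GUI; the pool and device names below are placeholders):

```shell
# Option 1: a single RAIDZ2 vdev, 6 data + 2 parity; any two drives can fail
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh

# Option 4: two independent RAIDZ1 pools (3 data + 1 parity each), so only
# one pool's drives need to spin for a given share; note that putting two
# raidz1 vdevs inside ONE pool would stripe them and keep all 8 spinning
zpool create pool1 raidz1 sda sdb sdc sdd
zpool create pool2 raidz1 sde sdf sdg sdh
```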
  5. Ditto... I'm seeing the same thing.
  6. tigga69

    ZFS simplified

    I've been reading everything I can about ZFS and watching the videos from spaceinvader and others, but I'm a little confused about what to do! I'm building a new server to replace an old one I can't expand any more, so I was going to take advantage of ZFS. I will be using 8x 4TB drives, i.e. they are all the same capacity. In doing this, am I better off with RAIDZ2, or XFS with 2 parity drives?

    ZFS gives me compression, which gives more capacity; I should also get higher read/write speed (I don't have CPU constraints, as it is a new server dedicated to running apps in dockers); and I can send snapshots to a backup machine. I'm assuming that using RAIDZ1 means the parity is spread across the drives, so performance doesn't depend on the single parity drive used by Unraid; this also means less wear from concentrating all the reads/writes on a single parity drive.

    XFS gives me easy capacity increases (but I'm at max capacity in an N2 case with 8 drives); less power, as I only spin up the drive I am using; and less heat, as I'm in a small case with limited cooling.

    What I am after is the highest resilience and good performance; I don't care about power consumption or processing impact. I naturally think I want RAIDZ2, but is this correct? Should I build the new box as pure ZFS with RAIDZ2, as XFS with 2 parity, or some hybrid like ZFS striping with 2 parity, or RAIDZ1 and 1 parity? Another option would be 2 pools of 4-drive RAIDZ1, so only half the drives spin up at any time; that gives both speed benefits and power and heat benefits. I should also add that I will have a 1TB M.2 for the cache.

    What is the reason not to use ZFS? And what config do people suggest, and why? (The why is important!) It's all new on Unraid, and parity vs ZFS makes the choices a little complex; I can see loads of opportunities to get it right or make a pig's ear of the initial config! Thoughts please.
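    On the compression and snapshot-to-backup points, the ZFS side of that is only a few commands; a sketch, with the pool, dataset, and host names as placeholders:

```shell
# Inline lz4 compression is cheap on a modern CPU; compressratio shows the gain
zfs set compression=lz4 tank/data
zfs get compressratio tank/data

# Snapshot, then replicate the snapshot to a backup machine over ssh
zfs snapshot tank/data@nightly
zfs send tank/data@nightly | ssh backuphost zfs recv backup/data
```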
  7. Hi, I've got 2 servers I'm rebuilding and I want to migrate to ZFS... but to do that without data loss, I'm going to need to use a method that requires RAIDZ expansion. Any idea (roughly which quarter) when we can expect to see this functionality in Unraid? Thanks.
  8. thanks to Kixsume for the starting points. https://github.com/oobabooga/text-generation-webui/issues/4850
  9. Just an FYI... the tensorrt_models.sh is out of date and doesn't work any more. The line that installs packages needs changing to... # Install packages pip install --upgrade pip && pip install onnx==1.15.0 protobuf==4.25.0
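    So the relevant fragment of tensorrt_models.sh becomes:

```shell
# Install packages
pip install --upgrade pip && pip install onnx==1.15.0 protobuf==4.25.0
```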
  10. Yes, I needed to change the M.2 socket to the primary one.
  11. So rather than a $40 Coral TPU, I should put a $1000 GPU into my Unraid server??? Hmmm, not sure I agree with that approach! Using desktops for TensorFlow is a different requirement to a server, and like the OP (and many others) I have a photo library dating back 25 years of family, friends, holidays etc., and would love to use AI to find old photos in a different way. I've just started using PhotoPrism because I can't find an alternative. When it works, it is amazing: find photos with person A and person B on a beach, and it gives you a selection of photos. The issue I have is that I have over 50K photos, but not all of them are detecting faces and matching. However, I read a note today on the background worker that I take to imply you have to wait a long time for it to process all the faces.

    The point the OP is really making is that CPU TensorFlow can process maybe 3 frames a second, whereas a Coral TPU would process 100. This means you could process my library in hours rather than days, and I could regularly add to the people and objects I want to recognise. Without Coral, I could spend a year sorting the library for facial recognition.
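    The back-of-envelope numbers behind "hours rather than days" (assuming one inference frame per photo and the 3 fps vs 100 fps figures above; real indexing runs several passes per photo, which stretches the CPU case out much further):

```shell
# 50,000 photos, one frame each, integer arithmetic
photos=50000
echo "CPU at 3 fps:     ~$(( photos / 3 / 3600 )) hours"
echo "Coral at 100 fps: ~$(( photos / 100 / 60 )) minutes"
```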
  12. @ich777 weird!! The important thing is that it works now, and it was really nothing more than a slot change from when I first got it. All the other stuff was already set correctly. It seems the slot does matter in my case, so it's one that's useful to know for future reference. I have to admit it was a surprise, but it does sort of make sense that the slots may act differently. I agree it comes down to the manufacturer's implementation of the hardware and BIOS, but I haven't got the inclination to try to diagnose further.
  13. @ich777 thanks for the feedback. I actually solved the problem before your last reply. It's a "doh" thing and pretty obvious when you sit and mull over the problem. Frigate has now been running fine on the Coral for several days. I ended up using the cheap adapter card and sending back the expensive one. https://www.amazon.co.uk/gp/product/B07TK9KMT5/ref=ppx_yo_dt_b_asin_title_o01_s00?ie=UTF8&psc=1

    So I can confirm both adapter cards work, including the StarTech.com PCI Express to Mini PCI Express Card Adapter. Important notes, as ich777 pointed out: you need to check that Above 4G decoding is enabled in the BIOS and that Resizable BAR is enabled. The most important thing: even though it is a short, single-lane adapter card, put it in the full-size slot!! The short PCIe x1 slot, in my case, does not support the Resizable BAR functionality; only the two PCIe x16 slots do. Given this is an Unraid server, and while I have a cheap graphics card, it is a headless device (i.e. no monitor attached), so it was just a matter of arranging the cards in the right order for my config. My SATA expansion card moved into the PCIe x1, so it wasn't an issue for me.

    Thanks for all the pointers and thought. To anyone else who can't get the USB or M.2 Coral cards: the Mini PCIe card with a cheap adapter, put in the right slot, with the ich777 drivers, works fine. Perhaps if I had sat and thought about it properly from the start I would have had it working in minutes rather than days, so I hope this helps someone.
  14. Right! I fixed it!!! I just tried a different slot... which was obvious really. I was using the PCIe x1 slot, and then I moved it to the PCIe x16 slot. I wasn't using the PCIe x16 because the HD controller board was in that slot! Big "doh!" from me. Shout out to Brian H., because it was his post that gave me the clue.
  15. @ich777 OK, so I ordered and installed a more expensive StarTech PCIe to Mini PCIe adapter card and have the same problem. The driver doesn't see a PCIe card, and as you said, the root of the problem is Above 4G not working. This is seen by the error "Cannot get BAR2 base address". So the Coral card is recognised on the bus, but the driver can't find it to address it. The motherboard is a Gigabyte B550M DS3H with firmware F15a from March 2022; there is a newer version, but I know that won't make any difference, so I haven't flashed it yet. Above 4G decoding is enabled and Re-Size BAR Support is set to Auto (the other option is Disabled). I've also left the PCIEX16 bifurcation on Auto rather than setting the lanes to a fixed setting. Any pointers you can provide to help??? I'm kind of at a loss, because I've never seen this type of problem before. Thanks in advance. R. server-ur1-diagnostics-20221108-2245.zip
  16. @ich777 everything makes sense, and your comment about the driver makes sense (which is why I was checking). I double-checked the BIOS and I did have Above 4G enabled (I did think I had!). I guess it is more likely the cheap adapter card I used, which was meant for a wifi adapter. I know adapter cards are problematic; I just expected the Coral card to not even be recognised. I'll swap that out and report back in a few days (once it turns up). Thanks again for the pointers.
  17. Hi, I need a little help, which might just be clarification. I've got a Coral Mini PCIe board. I've installed it OK into the hardware and it is seen by the PCIe bus (see image below). The apex and gasket drivers are installed (see below). BUT the apex drivers don't seem to recognise the Coral card, so it doesn't load for Frigate. The drivers from @ich777 just say "No PCI based Coral TPU Devices found!" (which isn't a surprise, because I can see the apex driver isn't recognising the card). People in 2020 were saying you need to recompile the apex and gasket drivers. Question 1: do we still need to compile new apex and gasket drivers? I got most of the way through, and lots of it wasn't working because so many bits needed extra installs, so I thought I would ask... which are the best instructions to follow for the build? Is it pgbtech's, or DiabloXP's, or Jaburges's, or some other combination? (Yes, I know the build number needs to be the latest, but I need to know which sequence to follow!) Thanks in advance for any advice, pointers, or direction. R.
  18. Did anyone solve this? Basically I'm moving from SmartThings to Home Assistant. There is only one thing left for me to do with Home Assistant before I throw away SmartThings: issue a voice command to instruct HA to do something. I don't want to pay the monthly subscription to allow HA to work with Amazon; I want to keep things local rather than going via the cloud. There are several palettes in Node-RED, but they require you to listen on port 80, and I can't work out how to listen on port 80. Can someone give an indication of how this would work in Node-RED? Thanks.
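    If the blocker is only that the palette must listen on port 80, the usual docker answer is a port mapping from host port 80 to whatever port the palette binds inside the container; a sketch (assuming the stock nodered/node-red image and that the palette listens on container port 80):

```shell
# Host port 80 -> container port 80 for the Alexa-style palette,
# plus the normal Node-RED editor on 1880; port 80 must be free on the host
docker run -d --name nodered -p 80:80 -p 1880:1880 nodered/node-red
```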
  19. Hi, OK, I'm a noob!! I decided over Easter to finally use that spare hardware, migrate off my old HP ProLiant MicroServers running Windows Server, and use Unraid on a custom-built server. Brilliant, and much easier to use and configure. One of the reasons was also to migrate from SmartThings to Home Assistant. Both were a great move and have proven uneventful; most stuff is working or mid-migration. I used the opportunity to build face recognition on the video streams for security reasons. I've happily got Frigate, Double Take, and DeepStack working and I'm able to trigger things, but I'm not happy with DeepStack and want to use CompreFace. I've tried to install CompreFace, but keep hitting problems depending on how I install the docker. With Corgan's template I basically get a server error when I try to run it (I don't even get a log out of the thing). With the default integration I get it to load, but it crashes every few seconds and goes into a startup loop (at various places). Has anyone actually got the CompreFace docker installed and working on an AMD Ryzen 3600X, and if so, can they give some pointers on getting it installed and running? I can add a log file if necessary, but I wanted to start with basic pointers first.