Posts posted by manofoz

  1. 3 hours ago, JorgeB said:

    container

     

    Thanks for the reply! Are you referring to this thread?

     

    Good to know this isn't my bottleneck. My parity checks and disk rebuilds/upgrades are taking two days. Maybe it's just that I have some really slow drives in the array.

     

    [screenshot: array disk benchmark results]

     

    I will replace disk 6 next, which benchmarked at 160 MB/s while Disk 11 on the same controller got 270 MB/s. I just can't imagine how any of this is supposed to work once 50TB disks hit the market. I guess people will have to either deal with 7-day parity checks or move to another platform.
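
     A back-of-envelope sketch of that worry, assuming the check runs at the slowest drive's average speed:

    # parity check time ≈ largest-disk capacity / average speed
    awk 'BEGIN { print 50e12 / 160e6 / 86400 " days" }'   # ~3.6 days at 160 MB/s
    awk 'BEGIN { print 50e12 / 270e6 / 86400 " days" }'   # ~2.1 days at 270 MB/s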

  2. Hello,

     

    I have two LSI 9207-8i's running my 16 HDDs. I installed DiskSpeed and saw that one of them was "downgraded", and its drives' benchmarks were clearly slower than the other's:

     

    Fast Controller:

    [screenshots: controller details and drive benchmarks]

    Slow Controller:

    [screenshots: controller details and drive benchmarks]

     

    I did some digging and found a command that shed more light on my issue:

     

    lspci -vv
    ...
    08:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
    
    ...
    
                    LnkSta: Speed 8GT/s, Width x4 (downgraded)
                            TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

     

    Width x4: the card is an x8 card, though I did not realize that when I purchased it. The manual clearly states it supports up to x8, so I am losing some speed, but it's hard to tell how much.
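
     For anyone else checking theirs: comparing what the card can do (LnkCap) against what it negotiated (LnkSta) shows exactly what's lost; the device address is the one from my lspci output above:

    # LnkCap = what the card supports, LnkSta = what it actually negotiated
    lspci -vv -s 08:00.0 | grep -E 'LnkCap:|LnkSta:'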

     

    Here are the options I'm considering, but I'd love a second opinion:
     

    • Live with half my drives being slower in a way that is hard to quantify (rough math after this list)
      • The same Exos X20 drive benchmarked at 270 MB/s on the x4 card and at 280 MB/s on the x16 card, so it doesn't seem that bad
    • Get something like the StorageTekPro STP-9300-16i, which is also x8 for 16 drives (dunno how that adds up), and use the true x16 PCIe slot
      • That card looks like it runs even hotter than the ones I have, which I mounted fans on. Would set me back $$ too.
    • Get some PCIe-to-SATA cards and use those plus my onboard SATA ports
      • Seems like a cheap option, but then I'd have an even bigger mess of cables than I do already
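
     Rough bandwidth math behind that first option (assuming PCIe 3.0 at 8 GT/s gives roughly 985 MB/s per lane after encoding overhead, with 8 drives on each controller):

    # x4 link ≈ 3.9 GB/s, x8 link ≈ 7.9 GB/s
    # 8 drives * ~270 MB/s ≈ 2.2 GB/s, so even x4 has sequential headroom,
    # which matches the small 270 vs 280 MB/s difference above
    awk 'BEGIN { print 4*985 " MB/s (x4)  vs  " 8*985 " MB/s (x8)" }'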

     

    My motherboard supports bifurcation from x16 to x8/x8, but I think that means setting a single slot into x8/x8 mode, which doesn't make much sense to me. If I could split the 16 lanes across both x16-length slots it would be an easy fix, but I don't think I am that lucky.

     

    Thanks!

OK, I found some more information:
     

    lspci -vv

     

    The OK one is reporting x8, which I assume means 8 PCIe lanes:
     

    01:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
    
    ...
    
                    LnkSta: Speed 8GT/s, Width x8
                            TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

     

    The other is reporting x4 as downgraded:

     

    08:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
    
    ...
    
                    LnkSta: Speed 8GT/s, Width x4 (downgraded)
                            TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
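
     A quick loop to compare both cards side by side (bus addresses from the output above):

    for dev in 01:00.0 08:00.0; do
      echo "== $dev =="
      lspci -vv -s $dev | grep -E 'LnkCap:|LnkSta:'   # capability vs negotiated link
    done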

     

    I bought these cards: https://www.amazon.com/dp/B0BVVDT4F1?psc=1&ref=ppx_yo2ov_dt_b_product_details

     

    Looks like they are PCIe x8; I was under the assumption they were x4.

     

    [screenshot: product listing showing a PCIe x8 card]

     

    I am using this motherboard: https://www.amazon.com/dp/B0BG7DY6MT?ref=ppx_yo2ov_dt_b_product_details&th=1

     

     

    Its second x16 slot is running only x4 lanes:

    [screenshot: motherboard specs showing the second x16 slot at x4]

     

    So maybe I am out of luck. Or maybe I can bifurcate the x16 slot into x8/x8 and get out of downgraded mode?

     

    I see this in the motherboard manual. It looks like it's intended for running PCIe M.2 adapters, but this may be similar. Think it's worth a shot?

     

    [screenshot: motherboard manual, PCIe bifurcation settings]

     

Sorry if this has been asked before, but I couldn't find anything on the intro page or by Googling. What does it mean for a controller's throughput to be reported as "downgraded"?

     

    One controller is "ok":

    Quote

    Broadcom / LSI
    Serial Attached SCSI controller

    Type: Onboard Controller
    Current Link Speed: (ok) width (ok) ( max throughput)
    Maximum Link Speed: 8GT/s width x8 (7.88 GB/s max throughput)


    The other controller is "downgraded":

    Quote

    SAS2308 PCI-Express Fusion-MPT SAS-2
    Broadcom / LSI
    Serial Attached SCSI controller

    Type: Onboard Controller
    Current Link Speed: (ok) width (downgraded) ( max throughput)
    Maximum Link Speed: 8GT/s width x8 (7.88 GB/s max throughput)

     

     

    I am mainly struggling with very long disk rebuild times. I just swapped a 4TB out for a 20TB and it rebuilt at around 160 MB/s, which took about two days. I have some drives that I know are slower than others, but I want to make sure my controllers are set up and cooled properly so that they are not the bottleneck. Everything is plugged into onboard PCIe x16 slots (one is the GPU slot and the other runs at x4) with mini-SAS-to-SATA cables.
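
     In case it helps to reproduce: a quick per-drive spot check of raw sequential reads outside DiskSpeed (a sketch; /dev/sdX is a placeholder for each drive):

    # timed sequential reads straight off the disk, independent of the array
    hdparm -t /dev/sdX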


    Benchmarks for the "ok" controller (the 20TB drives are parity, parity2, disk1, and disk2):

    [screenshot: drive benchmarks, "ok" controller]

     

    Benchmarks for the "downgraded" controller (the 20TB drive is Disk 11, slower than what we saw on the "ok" controller but not terrible; however, the drives here are much slower overall):

    [screenshot: drive benchmarks, "downgraded" controller]

     

    Thanks!

     

  5. Hello,

     

    I'd like to be able to run my server headless, but I only get clean reboots a small percentage of the time. Sometimes things don't shut down at all and get stuck on locked mounts or files, which I made a separate post about but didn't get any traction there:

     

    That only happened once and it's my main concern. What's happening consistently is that the USB drive isn't booting properly. It's either landing on "not automatically fixing this" logged to the console, going to a black screen and doing nothing, or not showing up as a boot drive at all. Having to rely on this inconsistent, flaky startup procedure is a good cause for anxiety, as a lot rides on the flash drive working. I have cloud backups enabled and some manual backups saved just in case it doesn't come back one of these times.

     

    To make matters worse, sometimes it boots fine. When it gets stuck on a black screen, I am usually able to go into the BIOS on the next boot, select the flash drive, and it comes up alright. The console "not automatically fixing this" errors have been a bit harder to deal with. When I get those, I usually switch which USB port the drive is in until they go away. Last time, one of the boots didn't even detect a USB boot device until I put it into the front port. Which port it likes seems to be random. It doesn't help that my server has three types of USB-A ports (3.2 Gen 1, 3.2 Gen 2, and 2.0 on the chassis).

     

    I am using this flash drive: https://www.amazon.com/dp/B07BPGF6N3?ref=ppx_yo2ov_dt_b_product_details&th=1. I am thinking my only next option here is to replace the flash drive, as I have tried everything else. Does anyone have a recommendation for a flash drive that would be a better choice? When I did my initial research, this was the one I found.

     

    To create a new flash boot device, it sounds like I just need to restore the backup via the USB creator: https://docs.unraid.net/connect/help/#restoring-a-flash-backup. That's easy enough; is there nothing additional to worry about with the license key?
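
     In the meantime I can at least keep an extra manual copy of the flash contents while the server is up (a sketch; the destination is just an example path):

    # the boot flash is mounted at /boot while Unraid is running
    rsync -a /boot/ /mnt/user/backups/flash-$(date +%F)/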

     

    If I've got to use USB 2.0 ports, then I would prefer to plug the drive in internally, as I only have USB 2.0 on the front of the chassis; the motherboard's ports are all USB 3.2. I would probably use something like this if heat's not an issue:

     

    [image: internal USB 2.0 header adapter]

  6. On 10/1/2023 at 9:11 PM, trurl said:

    Shrinking is for when you want to end up with fewer disks, usually with the data from the removed disk already copied to other disks in the array. It definitely is not faster.

     

    And there is no way to replace your drives all at once unless you want to start again with empty drives.

     

    Your first sentence says you want to replace, so upgrade is what you want to do. You could do 2 at once with dual parity, but simpler and safer to do one at a time as needed for capacity.

     

    I always upsize my disks since I have no room for more disks, and I have done it many times. Replacing disks is the whole reason you have parity. Not risky at all if your array is working well. Just keep the original disk intact until you are happy with the rebuild.

     

    In any case, you must always have another copy of anything important and irreplaceable. Parity is not a substitute for backups.

     

     

     

    Upsizing is in progress, but man, it's a lot slower than I expected. For one 20TB drive, which replaced a 4TB drive with 1TB of data on it, we are looking at 40+ hours!

    [screenshot: rebuild progress estimate]

    The drive is an Exos X20, specced at 280 MB/s, same as my parity drives. I first suspected my 5400 RPM drives were slowing the entire process down, since every drive is read at the same speed, but those are only the other 4-6TB drives, and the rebuild has since moved past the 6TB mark and stopped reading from them without getting any faster.
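
     The 40+ hour estimate actually checks out if the average over the whole 20TB is around 135 MB/s (outer tracks run near the 280 MB/s spec, inner tracks much slower):

    awk 'BEGIN { print 20e12 / 135e6 / 3600 " hours" }'   # ~41 hours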

     

    Just curious how this will work once we get 50TB hard drives. It will take a week to run a parity check or replace a drive.

  7. 2 hours ago, trurl said:

    Shrinking is for when you want to end up with fewer disks, usually with the data from the removed disk already copied to other disks in the array. It definitely is not faster.

     

    And there is no way to replace your drives all at once unless you want to start again with empty drives.

     

    Your first sentence says you want to replace, so upgrade is what you want to do. You could do 2 at once with dual parity, but simpler and safer to do one at a time as needed for capacity.

     

    I always upsize my disks since I have no room for more disks, and I have done it many times. Replacing disks is the whole reason you have parity. Not risky at all if your array is working well. Just keep the original disk intact until you are happy with the rebuild.

     

    In any case, you must always have another copy of anything important and irreplaceable. Parity is not a substitute for backups.

     

     

     

    Thanks! Upgrading it is. I've ordered 2x20TBs to start off with and will follow https://docs.unraid.net/legacy/FAQ/replacing-a-data-drive/. I got the 4TBs at a reduced price from a friend, so I figured I'd give them a shot. They will hopefully be easy to upgrade over time so I can make this build last a while before I need to switch to something that can hold more drives. I just hope it doesn't take 24+ hours to rebuild a drive, or if it does, that Plex at least stays functional.

     

    After those, I have other drives like 2x6TB I can replace, so I should be able to squeeze a lot out of this. Technically my motherboard and case can support a few more drives, but I'd have to get creative with how I mounted them (like replacing fans with drives...).

Everything I've checked seems to be running at the level it was before the server freaked out. It was pretty scary being unable to stop the array and reboot the server; I would love some insight on what could have gone wrong so I can prevent this from happening again.
     

    I was able to get the VM to re-mount /mnt/user by rebooting. Still no idea why a parity check started when the system came back online.

That seemed to do the trick, but a few odd things happened. My VM doesn't have my array mounted, and Windows acted like it was installed for the first time. It also wanted to run a parity check when it came back up, which was not normal.

     

    I also deleted the docker.img thinking that was holding things up, but it seems easy enough to restore the containers from previously downloaded apps.

  10. Hello,

     

    I stopped Docker to increase its allotted vdisk size, as I noticed it was filling up a bit yesterday before I cleaned out some logs. I shut down all the Docker containers and disabled Docker. I increased its disk size setting and restarted Docker. However, the Docker page would not load, and when I checked the logs it said "docker.img is in-use, cannot mount". I decided to try a clean restart to free up whatever was locking the file (I tried lsof and fuser on the Docker image file and got no hits). The mover was running so I couldn't stop the array, so I ran `mover stop`, and then I could stop the array. However, now it is complaining that my cache pool where appdata goes is busy, and it won't stop the array:

     

    Oct  1 17:10:05 HaynesTower emhttpd: shcmd (1537): umount /mnt/cache-apps
    Oct  1 17:10:05 HaynesTower root: umount: /mnt/cache-apps: target is busy.
    Oct  1 17:10:05 HaynesTower emhttpd: shcmd (1537): exit status: 32

     

    Maybe the same thing that was locking the docker.img file? I tried to dump diagnostics but now all I'm seeing is a gray screen. However it looks like a file was created on my flash drive:
    haynestower-diagnostics-20231001-1706.zip

     

    Not sure if I should do a dirty bounce at this point or if there is a way around /mnt/cache-apps being busy and hanging the shutdown. What a mess.
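
     For reference, checking the mount point itself (rather than the docker.img file) might have turned up the culprit; something along these lines:

    fuser -vm /mnt/cache-apps    # processes with files open anywhere under the mount
    lsof +f -- /mnt/cache-apps   # the same question asked via lsof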

    Thanks!

  11. Hello,

     

    I was thinking about shrinking my array via https://docs.unraid.net/legacy/FAQ/shrink-array/ so that I could then replace some 4TB drives with 20TB drives, but this process seems tedious, and I'm not sure it will be any faster than replacing data drives one at a time per https://docs.unraid.net/legacy/FAQ/replacing-a-data-drive/. The second route will let me use the drives I want to replace until I'm ready to pop new ones in, but won't let me replace all of the drives at once.


    Here are the drives I'm thinking about replacing, probably 1-2 at a time, with 20TB drives:


    [screenshot: 4TB drives slated for replacement]

     

    They are all "empty" except for whatever the file system writes to them, which is ~7% of their total space. Which is the least risky way to go here? Shrinking the array seems to add risk when clearing drives, reassigning drive numbers, etc. Replacing a drive seems to add risk by relying on parity to rebuild the drive for you, though I do have 2x20TB parity drives.

     

    I probably shouldn't have added these 4TB drives in the first place, as they do not provide much value, but I am going to hit the high-water mark soon where they start getting written to, and that would force me onto the replace-a-drive path.

     

    Thanks!

  12. 1 minute ago, manofoz said:

    Not OP, but I am having this problem right now. I can't even open the terminal from regular Chrome, but incognito mode works fine. This is what I get from `df -h /`:

     

    Filesystem      Size  Used Avail Use% Mounted on
    rootfs           63G  407M   63G   1% /

     

    A simple restart of Chrome on my desktop got everything working again. Weird that it got into this state, but easy to fix...

  13. Hello,


    I am experiencing an issue with Plex that I have tried a few things to get around, but so far no luck. Plex becomes 100% unresponsive and requires a restart. When this happens it spits out these logs:

     

    [screenshot: Plex error logs]

     

    It seems to happen while Plex Meta Manager is applying new overlays and something triggers a scan. They recommended I boost the "Database Cache Size (MB)", which I set to 16 GB:

     

    [screenshot: Plex database cache size setting]

     

    I also read on here that it is faster to point Plex directly at the cache rather than at the user share, so I tried that:

     

    [screenshot: container path mapping pointed at the cache]

     

    All of my DB files are on a RAID 1 pool of 2x2TB Crucial P3 Plus NVMe SSDs, specced for 5000 MB/s read and 4200 MB/s write. I read I could move them to a RAM disk, but that seems like a bad idea, as the author of the guide for that had no plan for unexpected shutdowns and everything would be lost.

     

    [screenshot]

     

    Anyone have a similar issue and find a way to overcome it?

     

    Integrity check of the db came back "ok":

    cd /mnt/cache-apps/appdata/Plex-Media-Server/databasetools/plexmediaserver
    "./Plex SQLite" \
      "/mnt/cache-apps/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db" \
      "pragma integrity_check"
    ok
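
     Since the integrity check passed, I may also try the standard SQLite maintenance passes with the same tool while Plex is stopped (my assumption that they'll help; they are plain SQLite statements, not a Plex-specific fix):

    DB="/mnt/cache-apps/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"
    "./Plex SQLite" "$DB" "VACUUM;"    # rebuild the database file, reclaiming free pages
    "./Plex SQLite" "$DB" "REINDEX;"   # rebuild all indexes from scratch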

     

    Thanks!

  14. On 6/6/2023 at 10:27 AM, L0rdRaiden said:

    It's possible with the docker version with

          - /var/run/docker.sock:/var/run/docker.sock

     

     

    server:
      http_listen_port: 9080
      grpc_listen_port: 0
    
    positions:
      filename: /positions/positions.yaml
    
    clients:
      - url: http://10.10.40.251:3100/loki/api/v1/push
    
    scrape_configs:
    - job_name: system
      static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /host/log/*log
    
    - job_name: docker
      # use docker.sock to filter containers
      docker_sd_configs:
        - host: unix:///var/run/docker.sock
          refresh_interval: 15s
          #filters:
          #  - name: label
          #    values: ["logging=promtail"]    # use container name to create a loki label
      relabel_configs:
        - source_labels: ['__meta_docker_container_name']
          regex: '/(.*)'
          target_label: 'container'
        - source_labels: ['__meta_docker_container_log_stream']
          target_label: 'logstream'
        - source_labels: ['__meta_docker_container_label_logging_jobname']
          target_label: 'job'

     

      promtail:
        # run as root, update to rootless mode later
        user: "0:0"
        container_name: Mon-Promtail
        image: grafana/promtail:main
        command: -config.file=/etc/promtail/docker-config.yaml
        depends_on:
          - loki
        restart: unless-stopped
        networks:
          mon-netsocketproxy:
          mon-netgrafana:
          br1:
            ipv4_address: 10.10.40.252
        dns: 10.10.50.5
        ports:
          - 9800:9800
        volumes:
          # logs for linux host only
          - /var/log:/host/log
          #- /var/lib/docker/containers:/var/lib/docker/containers:ro
          - /mnt/user/Docker/Monitoring/Promtail/promtail-config.yaml:/etc/promtail/docker-config.yaml
          - /mnt/user/Docker/Monitoring/Promtail/positions:/positions
          - /var/run/docker.sock:/var/run/docker.sock
        labels:
          - "com.centurylinklabs.watchtower.enable=true"

     

     

    This looks really interesting, but I don't have quite enough experience to interpret how to apply it. I'd love to have my logs set up with Loki and Promtail, so I'm going to give it a shot. I think it would be valuable experience.

     

    Is the first file a promtail-config.yml file? If so, that seems easy enough to adapt. The second looks like an entry for a Docker Compose file. I tried that with the following, which also includes the Loki dependency, but Promtail immediately closes both the console and log windows when I try to open them:

     

    version: "3"
    services:
      loki:
        image: grafana/loki:main
        ports:
          - 3100:3100
        command: -config.file=/etc/loki/local-config.yaml
      promtail:
        image: grafana/promtail:main
        ports:
          - 9800:9800
        command: -config.file=/etc/promtail/docker-config.yaml
        depends_on:
          - loki
        restart: unless-stopped
        volumes:
          # logs for linux host only
          - /var/log:/host/log
          #- /var/lib/docker/containers:/var/lib/docker/containers:ro
          - /mnt/user/appdata/promtail-config.yaml:/etc/promtail/docker-config.yaml
          - /mnt/user/appdata/promtail/positions:/positions
          - /var/run/docker.sock:/var/run/docker.sock
        labels:
          - "com.centurylinklabs.watchtower.enable=true"

     

    I am also missing a local-config.yaml. I am going to try one from here: https://grafana.com/docs/loki/latest/configure/examples/.

     

    Thanks! 

     

    Edit: I fixed the path error for the Promtail config and used the local Loki config from the website above, and the Promtail window doesn't disappear immediately anymore. Getting logs for both.

     

    Promtail is complaining about:
     

    Quote

    level=warn ts=2023-09-26T20:55:42.868024887Z caller=client.go:419 component=client host=192.168.0.200:3100 msg="error sending batch, will retry" status=429 tenant= error="server returned HTTP status 429 Too Many Requests (429): Ingestion rate limit exceeded for user fake (limit: 4194304 bytes/sec) while attempting to ingest '8423' lines totaling '1048557' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased"

     

    Seems like a good sign, I will try the root user.

     

    Loki has a similar complaint:

     

    Quote

    level=warn ts=2023-09-26T20:56:35.531113139Z caller=grpc_logging.go:60 method=/logproto.Pusher/Push duration=10.891684ms msg=gRPC err="rpc error: code = Code(429) desc = entry with timestamp 2023-09-26 10:26:42.627270316 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream '{container=\"binhex-sabnzbd\", logstream=\"stdout\"}' totaling 139B, consider splitting a stream via additional labels or contact your Loki administrator to see if the limit can be increased',\nentry with timestamp 2023-09-26 10:26:42.627448876 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream '{container=\"binhex-sabnzbd\", logstream=\"stdout\"}' totaling 58B, consider splitting a stream via additional labels or contact your Loki administrator to see if the limit can be increased',\nentry with timestamp 2023-09-26 10:26:42.627456959 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream '{container=\"binhex-sabnzbd\", logstream=\"stdout\"}' totaling 139B, consider splitting a stream via additional labels or contact your Loki administrator to see if the limit can be increased',\nentry with timestamp 2023-09-26 10:26:42.627593069 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream '{container=\"binhex-sabnzbd\", logstream=\"stdout\"}' totaling 58B, consider splitting a stream via additional labels or contact your Loki administrator to see if the limit can be increased',\nentry with timestamp 2023-09-26 10:26:42.627597038 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream '{container=\"binhex-sabnzbd\", logstream=\"stdout\"}' totaling 139B, consider splitting a stream via additional labels or contact your Loki administrator to see if the limit can be increased',\nentry with timestamp 2023-09-26 10:26:42.62774611 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream '{container=\"binhex-sabnzbd\", logstream=\"stdout\"}' totaling 58B, consider splitting a stream via additional labels or contact your Loki administrator to see if the limit can be increased',\nentry with timestamp 2023-09-26 10:26:42.627752147 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream '{container=\"binhex-sabnzbd\", logstream=\"stdout\"}' totaling 139B, consider splitting a stream via additional labels or contact your Loki administrator to see if the limit can be increased',\nentry with timestamp 2023-09-26 10:26:42.627887143 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream '{container=\"binhex-sabnzbd\", logstream=\"stdout\"}' totaling 58B, consider splitting a stream via additional labels or contact your Loki administrator to see if the limit can be increased',\nentry with timestamp 2023-09-26 10:26:42.627890474 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream '{container=\"binhex-sabnzbd\", logstream=\"stdout\"}' totaling 138B, consider splitting a stream via additional labels or contact your Loki administrator to see if the limit can be increased',\nentry with timestamp 2023-09-26 10:26:42.628039619 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting 
to ingest for stream '{container=\"binhex-sabnzbd\", logstream=\"stdout\"}' totaling 58B, consider splitting a stream via additional labels or contact your Loki administrator to see if the limit can be increased',\nuser 'fake', total ignored: 1697 out of 13599 for stream: {container=\"binhex-sabnzbd\", logstream=\"stdout\"}"
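
     Both 429s point at Loki's default ingestion limits, so presumably raising them in local-config.yaml quiets this down (option names are from Loki's limits_config documentation; the values are just a guess):

    limits_config:
      ingestion_rate_mb: 16             # global limit; default 4 MB/s, the limit in the Promtail 429
      ingestion_burst_size_mb: 32
      per_stream_rate_limit: 8MB        # per-stream limit; default 3 MB/s, the limit in the Loki warning
      per_stream_rate_limit_burst: 16MB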

     

  15. Hello, 

     

    I am struggling to configure nginx_status so I can wire up a Prometheus exporter and have some data. I was able to check the configure arguments of the container and I see that --with-http_stub_status_module is present. Now I am stuck trying to configure nginx to expose the endpoint. I am using the files imported by nginx.conf as documented here: https://nginxproxymanager.com/advanced-config/#custom-nginx-configurations. I think the http section is the best spot from what I've read, so I tried adding many variations of the following at both the top and bottom:

     

    server {
        listen 127.0.0.1:80;
        server_name 127.0.0.1;
        location /nginx_status {
            stub_status on;
            allow 127.0.0.1;
            deny all;
        }
    }

     

    None seem to work (trying the server's IP, port 8010, commenting out the location restrictions, etc.). I've read that you can configure this in nginx.conf, but I have also read that it should go into its own file in conf.d, as this article states: https://docs.nginx.com/nginx-amplify/nginx-amplify-agent/configuring-metric-collection/. Has anyone gotten this working with Prometheus who knows where I'm going wrong?

     

    Thanks!!! 

     

    Edit - Adding it to the advanced tab of a proxy host got it going:

     

    [screenshot: stub_status config in the proxy host's Advanced tab]
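
     For the Prometheus side, the official nginx exporter pointed at that endpoint should finish the wiring (a sketch; the host is a placeholder, and the allow/deny rules above would need to admit the exporter's IP):

    docker run -d --name nginx-exporter -p 9113:9113 \
      nginx/nginx-prometheus-exporter \
      -nginx.scrape-uri=http://<npm-host>/nginx_status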

  16. Hello,

     

    I was unable to boot from USB without going into the UEFI BIOS. I fixed that in a horrific way: I simply changed which USB port the Samsung BAR Plus drive was plugged into and broke everything. I got real excited since it started up, but sheer panic took over when it hung at "Unable to enumerate USB device". I tried a few more ports; USB 2.0, USB 3.0, and USB 3.2 all hung at "Unable to enumerate USB device". I then went back to where I started and hoped I could just manually boot from the UEFI BIOS, but nope, this time it decided to boot on its own and everything was fine. Not sure what just happened or how I can move forward diagnosing the issue.

     

    Thanks!

Update - I was able to get the USB to boot automatically after breaking everything. I simply decided to try a new USB port, a USB 2.0 one. The startup hung on "Unable to enumerate USB device". I tried a few other ports and kept hitting that error. I then went back to the original port, booted without touching anything, and it started right up, no problem. I am very confused; can you not change the USB port?

  18. 5 hours ago, Vr2Io said:

    To be honest, symptoms still match fast boot issue. I really hope LT can overcome this problem.

     

    If you confirm BIOS setting really save, then I would suggest you update Intel ME firmware Version 16.1.27.2176v2_S, but this seems need update under Windows ( may need manual command )

     

    Could I boot from a Win 11 flash drive, install the update, then boot back into the unRAID flash drive? 
