Arbadacarba


Posts posted by Arbadacarba

1. I have an internal slim Blu-ray drive plugged into a USB3 header on my mainboard... along with the Unraid boot flash drive.

     

    The system has booted successfully for years and continues to do so.

     

However, I have had a terrible time getting the optical drive to be recognized by Unraid... It doesn't show up in /dev/ at all...

     

Today I was making some other changes, and since I had needed the optical drive the other day and couldn't use it, I thought I would investigate a little... I pulled it and its adapter out of the server and plugged them into my laptop... They worked fine. I then plugged it back into my server in a different port and found it showing up in System Devices and in /dev/.

     

    Plugged it into the header it was always plugged into and it continued to work.

     

    Rebooted the server and:

     

    NO OPTICAL DRIVE

     

    Unplugged it and plugged it back in and it came back...

     

Any ideas?

     

    Is there a command to reset the USB header after the system has booted so I don't have to open the case up and hot plug it if I have to reboot?
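(From what I've read, the sysfs unbind/bind trick should cycle a port without opening the case... just a sketch; the "2-3" port ID below is a made-up example, and you'd find the real one with lsusb -t or by watching dmesg when you replug the drive:)

# identify the port ID (entries under /sys/bus/usb/drivers/usb/ look like "2-3")
lsusb -t
ls /sys/bus/usb/drivers/usb/

# power-cycle the port by unbinding and rebinding it
echo -n "2-3" > /sys/bus/usb/drivers/usb/unbind
sleep 2
echo -n "2-3" > /sys/bus/usb/drivers/usb/bind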

     

The attached diagnostics should show it booting up and then the drive coming online when I replug it.

     

    jupiter-diagnostics-20240313-1101.zip

2. I ran the Diagnostics download again and the list of files was much smaller... and it did not include the files I had manually moved. This time I noticed the listed files were restricted to the most recently added ones.

     

So I guess the question is: why do those lines scroll by on screen during the process?

3. I'm not surprised they are not there (though I see how that line could read like that)... The output scrolling by during diagnostics collection is complaining about them with the line below... It seems that Unraid is confused that they are no longer there:

     

    sed -i 's/\/mnt\/systems\/Files\/FOLDER NAME REDACTED\/11972.jpg/\/\/..g\/.../g' '/jupiter-diagnostics-20231209-0925/logs/syslog.1.txt' 2>/dev/null

     

I'm getting tens of thousands of these scrolling by during diagnostics collection. The referenced files are the ones I moved to the share on the array...

     

These were torrented files that I sorted into the correct share... For some reason they seemed to stay in a share folder on the "Systems" pool rather than moving over to the spinning array.

     

    The "Systems" Pool is used for AppData, VM's and as a Download destination so that the Spinning drives don't spin so often. But it was getting pretty full so I moved the backlog off to the spinning main array.

4. I was trying to include my diagnostics in another post, and when I went to run them I got the progress window, but it started displaying the names and locations of thousands of files on my server:

     

    sed -i 's/\/mnt\/systems\/Files\/FOLDER NAME REDACTED\/11972.jpg/\/\/..g\/.../g' '/jupiter-diagnostics-20231209-0925/logs/syslog.1.txt' 2>/dev/null

     

    And eventually stops responding...

     

I recently moved about 2TB of files from my "Systems" pool to my normal shares, and they no longer appear to be in that pool... Has something not caught up with the change? The files are accessible in the share where they should be...

     

    Is there a tool I need to run to rescan the file locations?

     

    Thanks


5. My server is set up for multiple purposes, but one of them is as my primary gaming machine... I have never noticed a problem with "micro stuttering", but I thought I should pin some of the CPU cores just to be sure.

     

My intention is to pin 6 cores (and their HT threads) and then keep an eye on how the remaining 4 cores keep up with the other tasks I use the server for.

     

    I have pinned them to the Gaming VM and set everything else up to use the rest:

     

[Screenshot: CPU pinning assignments]

     

    I've noticed I get a little bit of action on HT 15 even when the VM is shut down:

     

[Screenshot: CPU load graphs showing activity on HT 15]

     

Have I done this wrong? How can I see what specifically is using that CPU?

     

I occasionally get a flutter on one of the other CPUs, but HT 15 is consistently in use.
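(The closest thing I've found to check is the psr column in ps, which reports the processor each task is currently assigned to... just a sketch, with 15 being the core in question on my box:)

# list tasks whose current processor (psr) is CPU 15
ps -eo pid,psr,pcpu,comm | awk '$2 == 15'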

     

    -------------------------------------------------------------------

     

I was going to include my diagnostics, but I seem to have another problem... While creating the diagnostics it appears to be listing every file on my server... not literally every file, but it's been going for 20 minutes with lines like:

     

    sed -i 's/\/mnt\/systems\/Files\/- Torrents\/FolderName\/11972.jpg/\/\/..g\/.../g' '/jupiter-diagnostics-20231209-0925/logs/syslog.1.txt' 2>/dev/null

6. Hmm, so what kind of read/write speeds are you getting on the array?

     

If you are not running parity, you should be getting close to the same speed you were getting in Server 2019.

     

Can you create a RAM drive and do some speed tests directly in Unraid:
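Something along these lines should do it (a sketch only; the 4G size and the /mnt/disk1 paths are examples, so point it at a real file on one of your drives):

# create a RAM disk so the other side of the copy isn't the bottleneck
mkdir -p /mnt/ramtest
mount -t tmpfs -o size=4g tmpfs /mnt/ramtest

# read speed: drive -> RAM
dd if=/mnt/disk1/somebigfile of=/mnt/ramtest/test bs=1M status=progress

# write speed: RAM -> drive
dd if=/mnt/ramtest/test of=/mnt/disk1/testfile bs=1M status=progress

# clean up
rm /mnt/disk1/testfile
umount /mnt/ramtest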

     

     

    and see what kind of performance you are getting from the drives.

7. Fascinating... I've been trying to figure this out myself... I tried to play Tiny Tina yesterday and the results were horrible... The test demo was fine, but actually playing resulted in terrible stutters... Today I tried Halo CE at 1920x1080 and the stutters were horrible... But if I use my MX Anywhere 3 or my K600 (trackpad) it works flawlessly... My G502 is unusable...

     

I'm sure I've used the G502 before (I don't get a lot of time to play, so it's been a couple of months)... Has something changed?

8. I've been running it in a VM for 3 years now and have had ZERO problems... Would I implement it in a VM for a client? No, I make them buy Netgate appliances... Would I use it in my own business? Yup.

     

My first thought is to take a GOOD LOOK at System Devices, then downgrade to a known-good version of Unraid and take another look at System Devices to see if there is a difference when it comes to the network cards... Maybe a new driver is conflicting with the VM?

     

Maybe try swapping out any gear you can... Simplify the config and see if the problem goes away? If you can't change the hardware, maybe shut the Pi-hole down for a little while.

     

As for MrGrey... have you never tried something new? Pushed a boundary on what you thought was possible and needed a little support? I learn by breaking things every day, and there are people on here who know so much about this stuff... I've had problems that people on this board answered where I barely recognized the words they used, and I've been in this industry (successfully) for 30 years.

9. I'm going to add that it's not just VMs... I've spent the last ten years eradicating spinning rust from every machine I work with. (Over 400 systems that I'm responsible for. I have over 100 machines that are more than 8 years old, but they run just fine in the medical environments I work with, so long as they have an SSD.)

     

    You find the VM laggy, but I wonder if you would feel the same way if you loaded the OS natively on the bare metal you are working with?

     

I have over 100TB of storage on my machine, and 80TB of that is rust... But none of my VMs are on anything other than SSDs.


10. I've run things like that in the go file:

     

#!/bin/bash
# Start the Management Utility
/usr/local/sbin/emhttp &

# Disable power button short press
cp /boot/custom/scripts/acpi_handler.sh /etc/acpi

# Enable Intel Quick Sync hardware acceleration for Plex
modprobe i915
chmod -R 777 /dev/dri

# Copy the VM icons for custom VMs
cp /boot/custom/icons/* /usr/local/emhttp/plugins/dynamix.vm.manager/templates/images

# Disable ACPI handler that locks up one core (deprecated?)
# echo "disable" > /sys/firmware/acpi/interrupts/gpe6F

# Enable FANCONTROL and copy conf file over to live system
# cp /boot/custom/scripts/fancontrol /etc
# fancontrol &
# echo Enter

# Customise BASH coloring
cp /boot/custom/scripts/.bashrc /root

# Install sensor drivers
modprobe coretemp
modprobe nct6775
/usr/bin/sensors -s
# modprobe lm75 - Old mainboard

     

     


11. How do we use Debug mode and submit its output...

A couple of issues:

1. How do we reorder the items in Dockers and VMs? I thought you could just drag them, but I can't seem to get that to work.
2. I have a few Dockers that have been moved into the folder but still appear outside of it.
3. The list of Dockers to include is spaced very strangely. (No wait... it's just centered instead of left-justified.)

    You have no idea how excited I am to have this working again. (I've come to the conclusion that I need to get out more)

12. So I was able to build a qcow2 VHD (.img) with Ventoy and Medicat on it. (I booted a Debian VM with a new 64G VHD attached and ran the Medicat Linux installer.)

     

In a dedicated VM it boots, and in theory it could condense all my various utility VMs down to one; if I keep the same ISOs on my EDC thumb drive, I can have the same tool collection with me when I go on site.

     

    But do I really have to boot another VM with this attached to access it? 
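(What I'd like is to mount it straight from the Unraid host... something like this qemu-nbd sketch, assuming the nbd kernel module is available; the vdisk path is mine, and this should only be done while the VM is shut down:)

# expose the qcow2 image as a block device
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 /mnt/user/domains/MediCat/vdisk1.qcow2

# mount the first partition (on a Ventoy layout that's usually the data partition)
mkdir -p /mnt/medicat
mount /dev/nbd0p1 /mnt/medicat

# ...copy ISOs in or out...

# tear down
umount /mnt/medicat
qemu-nbd --disconnect /dev/nbd0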

13. So I've had a further idea... I have been using these various disk recovery tools in my home lab to fix and duplicate drives... And occasionally I have to take the tools with me to help friends and family.

     

I recently tried Medicat and have been really happy with it, using the tools it includes and adding ISOs of my own purchased tools.

     

But I've started trying to find a way to use Medicat in my Unraid VMs. I've got a VM that boots from the Medicat thumb drive, and that works, but again: why can't I virtualize the USB drive?

     

I've downloaded the Linux creation tool for Medicat, and it seems like I should be able to mount a vdisk in the terminal and run the Medicat installer into that. It would be so useful to be able to access that disk from outside.
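(For a raw .img vdisk it seems like a loop device ought to do it... again just a sketch, and the path is only an example:)

# attach the image with partition scanning (-P) and print the loop device it got
LOOP=$(losetup -fP --show /mnt/user/domains/MediCat/medicat.img)

# mount the data partition and run the Medicat installer against it
mkdir -p /mnt/medicat
mount "${LOOP}p1" /mnt/medicat

# ...add ISOs, run the installer, etc...

umount /mnt/medicat
losetup -d "$LOOP"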

  14. I don't mean to hijack the thread, but I thought if I made the question a little more comprehensive we might get a response:

     

    I'm running:

1 array of 7 drives with a single parity disk (containing stored media; mixed disk sizes) [xfs]

1 2TB NVMe cache drive [btrfs]

1 system pool of 2 striped 4TB SSDs (holding Dockers, VMs, and the system folder) [btrfs]

     

    Is there any benefit to my switching these to ZFS?

     

My system pool is where I wanted the best performance. I would have used the NVMe drive, but I got terrible performance from the SSDs when mirrored, and an 8TB cache seemed wasteful.

     

I'm not worried about the how-to, as I'm sure I can figure that out... I just don't understand why everyone is so excited about ZFS.

15. I figured mine out... I'm using FancyZones to split my 32" monitor into four zones... all the same width (half the screen; it's 4K, so 1920 pixels each) but varying heights.

     

I generally have my homelab window in the top left, and the dashboard had dropped to 2 columns in 6.12.

     

I played with it and set the border (space around zones) to -4 (I had it at -3 before) and voilà, 3 columns.

     

[Screenshot: the Unraid dashboard now showing 3 columns]

     

    pfSense has the option of defining the number of columns for the dashboard. That would be helpful here.

     

    Now if only the Docker and VM folders app worked.