Posts posted by KptnKMan

  1. Hi, this is a post to notify that I'm VERY soon (in the next few weeks) selling major parts from my homelab and my 2 main Unraid systems (in my signature).

    Mods, I ask that I can list here for visibility and post more detailed topics later.

    Mods, please let me know if I'm doing this wrong, and please don't delete my topic.

     

    Selling is due to family emergency requiring me to internationally relocate to help family.

    So... I'm looking for a good home for these items.

     

    Everything is functioning in what I would consider great condition, with no issues. Once I've taken everything apart I'll post pics.

    I can try to ship if it is worth it, but pickup is preferred, from the area of The Hague/Delft/Rotterdam in the Netherlands.

    Buyer would pay shipping.

     

    Items soon to be for sale:

     

    [4Sale] 3x APC 1000XL UPS, each with APC AP9631 Network Interface Card and original APC RBC7 Battery (Installed 2022-09-20).

    200€ each

     

    [4Sale] Cooler Master Stacker 810 case, original from 2006, unmodified in excellent condition, all fans, with all accessories and original box.

    80€

     

    [4Sale] Rackmountable 4U Black Case, space for 6x 5.25" bays & 4x 2.5" bays, all fans, not sure the exact model but will find and provide pics, no box.

    50€

     

    [4Sale] CoolerMaster MasterBox Lite 5 Black Case with window, unused, no box.

    40€

     

    [4Sale] 2x IcyBox RAID Caddy 5.25" to 5x 3.5" (looks like this) with SATA cables and all parts.

    40€ each

     

    [4Sale] 2x SATA SAS HDD Cage 5.25" to 5x 3.5" (looks like this) with SATA cables, SATA Power cables and all parts like drive fitters.

    20€ each

     

    [4Sale] AMD Ryzen 3600 CPU, excellent condition with cooler.

    60€

     

    I will possibly have more for sale; I will add items here if relevant.

  2. I've been considering turning this on for a while.

     

    Just asking, is there a timeline or idea for when the auto option will be implemented?

  3. Thanks, I totally forgot about docker interactive console.

     

    I'm trying to resolve a problem where I think the webserver files have the wrong permissions.

     

    In particular I think the following dirs:

     

    - /var/www/html/

    - /var/www/html/config

    - /var/www/html/data

     

    Do you know the correct permissions and owners for these dirs?

     

     

  4. On 2/25/2024 at 7:57 PM, Antoni Żabiełowicz said:

    Just go with "docker exec -u 99 nextcloud php cron.php". On Unraid nextcloud files are owned by user 99. There's no www-data. Sometimes you're gonna get a warning because of using different user than www-data but it's nothing to worry about.

    Thanks, that helps with docker exec.

     

    However, I still cannot su within the container console, and it's annoying because my container has stopped working and I have no su/sudo access.

     

    How can I get the root password?

  5. On 2/14/2020 at 2:03 PM, randomninjaatk said:

    Option 2 seems easiest, created the following script below and set it to run every 5 min using the "Users Scripts" plugin

    #!/bin/bash
    docker exec -u www-data Nextcloud php -f /var/www/html/cron.php
    exit 0

    Thanks!

     

    Hi, I've been trying to get this to work for some time and I've been unable to.

    When I run the command, I get:

    Console has to be executed with the user that owns the file config/config.php
    Current user id: 33
    Owner id of config.php: 99

     

    I'm trying to get cron running using the mentioned 2nd method.

    When I check the /etc/passwd file, there is no user 99.

    Any ideas?

     

    Also, whenever I try to su within the container, I get a password prompt, but what is the password?

    Additionally, when I try to sudo I get an error that sudo is not found.

    Any help would be appreciated.
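    For what it's worth, docker exec -u accepts a numeric uid directly, even when that uid has no entry in the container's /etc/passwd, which is why user 99 works without showing up there. Here is a hedged, untested sketch of a cron wrapper that derives the owner uid instead of hard-coding it (the container name "Nextcloud" and the config path are assumptions from this thread):

```shell
#!/bin/bash
# Hedged sketch (untested): run Nextcloud's cron as whatever numeric uid
# owns config.php, instead of hard-coding a user. docker exec -u accepts a
# numeric uid even when it has no /etc/passwd entry inside the container.
# The container name "Nextcloud" is an assumption from this thread.
run_nc_cron() {
  local uid
  # stat -c '%u' prints only the numeric owner uid of the file
  uid=$(docker exec Nextcloud stat -c '%u' /var/www/html/config/config.php)
  docker exec -u "$uid" Nextcloud php -f /var/www/html/cron.php
}
```

    You could call run_nc_cron from a User Scripts entry on a 5-minute schedule, same as the method quoted above.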

  6. I know this is an old thread, but nothing here worked for me.

     

    What worked for me on unraid 6.12.3:

     

    Check nginx processes (should return process list):

    netstat -tulpn | grep nginx

     

    netstat should return something like this (showing nginx running on ports 80 and 443):

    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp      512      0 192.168.22.18:80        0.0.0.0:*               LISTEN      8803/nginx: master
    tcp        0      0 127.0.0.1:443           0.0.0.0:*               LISTEN      8803/nginx: master
    tcp        0      0 127.0.0.1:80            0.0.0.0:*               LISTEN      8803/nginx: master

     

    Kill the nginx processes (actually kills whatever process is on ports 80 and 443):

    sudo fuser -k 80/tcp
    sudo fuser -k 443/tcp

     

     Check nginx processes again (should be blank):

    netstat -tulpn | grep nginx

     

    Start nginx again:

    /etc/rc.d/rc.nginx start

     

    Maybe this helps someone.
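    The steps above rolled into one function, as a sketch; the rc.nginx path assumes stock Unraid 6.12.x:

```shell
#!/bin/bash
# Sketch of the manual steps above as one function (assumes stock Unraid,
# where nginx is managed by /etc/rc.d/rc.nginx).
restart_webgui() {
  fuser -k 80/tcp 443/tcp 2>/dev/null    # kill whatever holds the web ports
  sleep 1
  # refuse to start a second instance if nginx is somehow still up
  if netstat -tulpn 2>/dev/null | grep -q nginx; then
    echo "nginx still running, not restarting" >&2
    return 1
  fi
  /etc/rc.d/rc.nginx start
}
```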

  7. To everyone responding in this thread, READ THE THREAD, all of it.

     

    Read the instructions I left and the responses. Read the linked Reddit post.

     

    The latest unraid 6.12.x DOES NOT REQUIRE THE CUSTOM BIOS in the first few comments.

     

    I left an updated summary just a little bit back, that definitely works on both my systems.

     

    A few extra pointers:

    -make sure your VM works without REBAR enabled first, mine did.

    -make sure to use the unraid VM REBAR BIOS; without checking, it's not the SeaBIOS, it's the other one. Make sure to use the UEFI-enabled VM BIOS.

    -make sure you are passing through both the gfx card AND the card's sound device. I would suggest no other audio devices in the VM.

    -read other threads about passthrough FIRST and get that working, this thread isn't diagnosing basic passthrough issues like code 43.

    -these instructions REQUIRE manual XML editing; if you then update the VM in GUI mode, the required custom XML edits will be lost. Be aware.

  8. Some command responses I've seen so far:

    # zpool import /dev/sda
    cannot import '/dev/sda': no such pool available

     

    # zpool import
    no pools available to import

     

    # zfs mount 
    sys_sync2                       /mnt/disks/sys_sync2

     

    # zfs list
    NAME        USED  AVAIL     REFER  MOUNTPOINT
    sys_sync   14.4T   128M     14.4T  /mnt/disks/sys_sync1
    sys_sync2  12.5T  1.95T     12.5T  /mnt/disks/sys_sync2

     

    I also tried:

    # zfs mount sys_sync

    but this appears to hang and do nothing.
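    For anyone in a similar spot, here is a hedged sketch of the standard ZFS CLI calls I'd try next (pool name from above; -d tells zpool import where to scan for devices, and -R imports under a temporary altroot):

```shell
#!/bin/bash
# Hedged sketch of standard ZFS CLI troubleshooting steps; the pool name
# sys_sync and the /mnt/disks altroot are assumptions from this thread.
try_import() {
  zpool import -d /dev/disk/by-id       # rescan for importable pools by stable id
  zpool import -f -R /mnt/disks sys_sync  # force-import under a temporary altroot
  zpool status sys_sync                 # then check pool health
}
```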

  9. Hi all, apparently it's my luck that this happens a day before I go travelling.

     

    I have 2 new 16TB Western Digital "My Book" drives that I have connected to my Unraid.

    I decided about a week ago to format them both with ZFS (to try out ZFS), so I did, and filled them with data.

     

    Today, I shut down both servers and moved the disks to the second server, so I could back them up.

    Upon starting up, the first disk will not mount, but the second appears to mount fine, both were formatted the same way without errors.

     

    Checking the disk log I see this:

    
    Jul 29 23:13:14 soundwave kernel: sd 1:0:0:0: [sda] Very big device. Trying to use READ CAPACITY(16).
    Jul 29 23:13:14 soundwave kernel: sd 1:0:0:0: [sda] 31251759104 512-byte logical blocks: (16.0 TB/14.6 TiB)
    Jul 29 23:13:14 soundwave kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
    Jul 29 23:13:14 soundwave kernel: sd 1:0:0:0: [sda] Write Protect is off
    Jul 29 23:13:14 soundwave kernel: sd 1:0:0:0: [sda] Mode Sense: 47 00 10 08
    Jul 29 23:13:14 soundwave kernel: sd 1:0:0:0: [sda] No Caching mode page found
    Jul 29 23:13:14 soundwave kernel: sd 1:0:0:0: [sda] Assuming drive cache: write through
    Jul 29 23:13:14 soundwave kernel: sda: sda1
    Jul 29 23:13:14 soundwave kernel: sd 1:0:0:0: [sda] Attached SCSI disk
    Jul 29 23:13:57 soundwave emhttpd: WD_My_Book_25ED_32504A353237474A-0:0 (sda) 512 31251759104
    Jul 29 23:13:57 soundwave emhttpd: read SMART /dev/sda
    Jul 29 23:14:03 soundwave unassigned.devices: Disk with ID 'WD_My_Book_25ED_32504A353237474A-0:0 (sda)' is not set to auto mount.
    Jul 29 23:29:00 soundwave unassigned.devices: Mounting partition 'sda1' at mountpoint '/mnt/disks/sys_sync2'...
    Jul 29 23:29:03 soundwave unassigned.devices: Successfully mounted 'sda1' on '/mnt/disks/sys_sync2'.
    Jul 29 23:29:03 soundwave unassigned.devices: Device '/dev/sda1' is not set to be shared.
    Jul 29 23:30:01 soundwave unassigned.devices: Unmounting partition 'sda1' at mountpoint '/mnt/disks/sys_sync2'...
    Jul 29 23:30:13 soundwave unassigned.devices: Successfully unmounted 'sda1'
    Jul 29 23:38:38 soundwave kernel: sd 1:0:0:0: [sda] Spinning up disk...
    Jul 29 23:38:57 soundwave kernel: sd 1:0:0:0: [sda] Very big device. Trying to use READ CAPACITY(16).
    Jul 29 23:38:57 soundwave kernel: sd 1:0:0:0: [sda] 31251759104 512-byte logical blocks: (16.0 TB/14.6 TiB)
    Jul 29 23:38:57 soundwave kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
    Jul 29 23:38:57 soundwave kernel: sd 1:0:0:0: [sda] Write Protect is off
    Jul 29 23:38:57 soundwave kernel: sd 1:0:0:0: [sda] Mode Sense: 47 00 10 08
    Jul 29 23:38:57 soundwave kernel: sd 1:0:0:0: [sda] No Caching mode page found
    Jul 29 23:38:57 soundwave kernel: sd 1:0:0:0: [sda] Assuming drive cache: write through
    Jul 29 23:38:57 soundwave kernel: sda: sda1
    Jul 29 23:38:57 soundwave kernel: sd 1:0:0:0: [sda] Attached SCSI disk
    Jul 29 23:38:58 soundwave unassigned.devices: Disk with ID 'WD_My_Book_25ED_3348475A58554C4E-0:0 (sda)' is not set to auto mount.
    Jul 29 23:39:01 soundwave emhttpd: read SMART /dev/sda
    Jul 29 23:40:04 soundwave unassigned.devices: Mounting partition 'sda1' at mountpoint '/mnt/disks/sys_sync1'...
    Jul 29 23:40:06 soundwave unassigned.devices: Mount of 'sda1' failed: 'Cannot determine Pool Name of '/dev/sda1''
    Jul 29 23:40:30 soundwave unassigned.devices: Mounting partition 'sda1' at mountpoint '/mnt/disks/sys_sync1'...
    Jul 29 23:40:33 soundwave unassigned.devices: Mount of 'sda1' failed: 'Cannot determine Pool Name of '/dev/sda1''
    Jul 29 23:40:43 soundwave unassigned.devices: Mounting partition 'sda1' at mountpoint '/mnt/disks/sys_sync'...
    Jul 29 23:40:45 soundwave unassigned.devices: Mount of 'sda1' failed: 'Cannot determine Pool Name of '/dev/sda1''
    Jul 29 23:41:32 soundwave unassigned.devices: Mounting partition 'sda1' at mountpoint '/mnt/disks/sys_sync'...
    Jul 29 23:41:35 soundwave unassigned.devices: Mount of 'sda1' failed: 'Cannot determine Pool Name of '/dev/sda1''
    Jul 29 23:42:49 soundwave unassigned.devices: Mounting partition 'sda1' at mountpoint '/mnt/disks/sys_sync'...
    Jul 29 23:42:52 soundwave unassigned.devices: Mount of 'sda1' failed: 'Cannot determine Pool Name of '/dev/sda1''
    Jul 29 23:43:08 soundwave unassigned.devices: Mounting partition 'sda1' at mountpoint '/mnt/disks/sys_sync1'...
    Jul 29 23:43:11 soundwave unassigned.devices: Mount of 'sda1' failed: 'Cannot determine Pool Name of '/dev/sda1''
    Jul 29 23:43:22 soundwave ool www[19861]: /usr/local/emhttp/plugins/unassigned.devices/scripts/rc.unassigned 'detach' 'sda' 'true'
    Jul 29 23:43:24 soundwave unassigned.devices: Device 'sda' has been detached.
    Jul 29 23:45:44 soundwave kernel: sd 1:0:0:0: [sda] Very big device. Trying to use READ CAPACITY(16).
    Jul 29 23:45:44 soundwave kernel: sd 1:0:0:0: [sda] 31251759104 512-byte logical blocks: (16.0 TB/14.6 TiB)
    Jul 29 23:45:44 soundwave kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
    Jul 29 23:45:44 soundwave kernel: sd 1:0:0:0: [sda] Write Protect is off
    Jul 29 23:45:44 soundwave kernel: sd 1:0:0:0: [sda] Mode Sense: 47 00 10 08
    Jul 29 23:45:44 soundwave kernel: sd 1:0:0:0: [sda] No Caching mode page found
    Jul 29 23:45:44 soundwave kernel: sd 1:0:0:0: [sda] Assuming drive cache: write through
    Jul 29 23:45:44 soundwave kernel: sda: sda1
    Jul 29 23:45:44 soundwave kernel: sd 1:0:0:0: [sda] Attached SCSI disk
    Jul 29 23:45:45 soundwave unassigned.devices: Mounting partition 'sda1' at mountpoint '/mnt/disks/sys_sync1'...
    Jul 29 23:45:48 soundwave unassigned.devices: Mount of 'sda1' failed: 'Cannot determine Pool Name of '/dev/sda1''
    Jul 29 23:45:57 soundwave unassigned.devices: Mounting partition 'sda1' at mountpoint '/mnt/disks/sys_sync1'...
    Jul 29 23:45:59 soundwave unassigned.devices: Mount of 'sda1' failed: 'Cannot determine Pool Name of '/dev/sda1''
    Jul 29 23:46:04 soundwave unassigned.devices: Mounting partition 'sda1' at mountpoint '/mnt/disks/sys_sync1'...
    Jul 29 23:46:07 soundwave unassigned.devices: Mount of 'sda1' failed: 'Cannot determine Pool Name of '/dev/sda1''
    Jul 29 23:51:02 soundwave unassigned.devices: Mounting partition 'sda1' at mountpoint '/mnt/disks/sys_sync1'...
    Jul 29 23:51:04 soundwave unassigned.devices: Mount of 'sda1' failed: 'Cannot determine Pool Name of '/dev/sda1''
    Jul 29 23:51:34 soundwave unassigned.devices: Mounting partition 'sda1' at mountpoint '/mnt/disks/sys_sync1'...
    Jul 29 23:51:37 soundwave unassigned.devices: Mount of 'sda1' failed: 'Cannot determine Pool Name of '/dev/sda1''
    Jul 29 23:52:02 soundwave unassigned.devices: Mounting partition 'sda1' at mountpoint '/mnt/disks/sys_sync1'...
    Jul 29 23:52:05 soundwave unassigned.devices: Mount of 'sda1' failed: 'Cannot determine Pool Name of '/dev/sda1''
    Jul 29 23:57:53 soundwave unassigned.devices: Mounting partition 'sda1' at mountpoint '/mnt/disks/sys_sync1'...
    Jul 29 23:57:56 soundwave unassigned.devices: Mount of 'sda1' failed: 'Cannot determine Pool Name of '/dev/sda1''
    Jul 30 00:00:49 soundwave unassigned.devices: Mounting partition 'sda1' at mountpoint '/mnt/disks/sys_sync1'...
    Jul 30 00:00:52 soundwave unassigned.devices: Mount of 'sda1' failed: 'Cannot determine Pool Name of '/dev/sda1''
    Jul 30 00:01:20 soundwave unassigned.devices: Mounting partition 'sda1' at mountpoint '/mnt/disks/sys_sync1'...
    Jul 30 00:01:22 soundwave unassigned.devices: Mount of 'sda1' failed: 'Cannot determine Pool Name of '/dev/sda1''
    

     

    In the UI, I can see the disk mount button, but it just fails every time I try to mount it.

     

    I can't see any other obvious errors. I tried detaching and disconnecting both disks, then reconnecting, and got the same scenario (disk 1 fails to mount).

     

    This is the first time I'm using ZFS, and I'm not sure where to start.

    I can't find any other details in the UI; there are no SMART errors showing, and it's a new disk, a week old.

     

    Can anyone help?

  10. Hi all,

    I'm looking for some advice or experiences with running Unraid on QNAP/Synology/Other dedicated hardware.

    Hopefully someone out there can provide some general or specific recommendations for this?

     

    I'm a pretty happy user of unraid, and I'm looking to expand with a dedicated NAS system for array network storage.

    At the moment, I have 2 Unraid systems on AMD Ryzen that I use daily, but I'm looking to reduce my electricity usage by moving essential storage to a centralised NAS unit, and leave my primary/secondary systems as compute nodes that I can switch off when not in use.

     

    The idea is to be able to offload tasks from my more hungry systems and make a NAS unit my always-on 24/7 system.

     

    I've been looking into specs and videos of QNAP and Synology NAS boxes, and I have been reading that Unraid runs on these without too much difficulty, but I've read mixed reports about how well.

     

    In a perfect world I'd love to get something like a Synology DS2422+ with 12 bays and just expand as I go, but I'm not sure if this will run Unraid at all.

    Realistically, I've tried to break it down to a definite set of requirements that I think will work for my background needs/workloads (NAS, plex, sonarr/radarr, wordpress, nextcloud).

     

    My requirements:

    - 8+ 3.5" bays for array

    - m.2 NVME port(s)

    - 10Gbit Networking

    - Preferably under 1k euros/dollars for unit

    - Preferably new

     

    Nice to have:

    - GPU for Plex hardware acceleration

    - Enough oomph for VMs

     

    Units I've looked at:

    - Synology DS1821+ (8 bay, m.2 slots)

    - QNAP TS-832PX-4G (8 bay, 10Gbit onboard)

    - Synology DS2422+ (12 bay)

  11. Just upgraded both my main production systems from 6.11.5 direct to 6.12.1.

     

    Everything works great, I can't complain.

     

    Well done guys, another perfect release (for me at least).

     

    Main specs, as a snapshot in case this changes:
     

    UnRAID1 (6.12.1 Pro) Primary: Ryzen 9 5950x, ASUS TUF GAMING X570-PLUS (WI-FI), 128GB (4x32GB) Kingston Unbuffered ECC PC4-25600 DDR4-3200, Gigabyte RTX3090 Turbo 24G

     

    UnRAID2 (6.12.1 Pro) Backup: Ryzen 5 3600, ASUS TUF GAMING X570-PRO (WI-FI), 128GB (4x32GB) Micron Unbuffered ECC PC4-25600 DDR4-3200, EVGA GTX1080Ti FTW3 GAMING

  12. On 6/15/2023 at 12:41 PM, KptnKMan said:

    Hi @Skitals I took another look at the kernel patch you linked earlier.

    As far as I can tell, this patch is present in kernel 6.1 since v6.1-rc1 which is now present in unraid 6.12.

     

    So I wanted to ask, on the surface do you know if there would be any obvious reason that this wouldn't be present, or not work in unraid 6.12?

     

    Also, I've yet to test 6.12, so I haven't personally verified any issues of missing ReBar in unraid 6.12 using kernel 6.1.

     

    @jonathanselye can you confirm and describe what you've done to test on unraid 6.12, what hardware you use, and what your exact results were?

    @Skitals and @jonathanselye, I just upgraded from 6.11.5 direct to 6.12.1.

     

    I can confirm that everything works on the Unraid 6.12.1 stock kernel 6.1.34.

     

    My daily VM has ReBar enabled.

     

    I can report any issues encountered, but everything seems to work as of right now.

  13. 21 hours ago, jonathanselye said:

    how do i get my ideal bar size?

    You will need to determine that yourself; the available sizes are written out in the provided script, but essentially you need to pick one that your card's memory fits in (and that is supported). You haven't stated whether your Arc A770 is 8GB or 16GB, so I would suggest starting there for a hint. You can also try listing the compatible BAR sizes with this command:

    lspci -vvvs {YOUR_GPU_ID_HERE} | grep "BAR"

    You should see something like this:

    root@unraid1:~# lspci -vvvs 0b:00.0 | grep "BAR"
            Capabilities: [bb0 v1] Physical Resizable BAR
                    BAR 0: current size: 16MB, supported: 16MB
                    BAR 1: current size: 32GB, supported: 64MB 128MB 256MB 512MB 1GB 2GB 4GB 8GB 16GB 32GB
                    BAR 3: current size: 32MB, supported: 32MB
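    Once you've picked a supported size from that list, the script writes it to the card's sysfs resize attribute; if I recall correctly the kernel expects log2 of the size in MB (so 8GB -> 13, 16GB -> 14). A small helper to compute that, as a sketch:

```shell
# Hedged helper: convert a BAR size in MB (power of two) to the exponent
# that, as I understand it, the kernel's resourceN_resize sysfs attribute
# expects (value n selects a BAR of 2^n MB).
bar_exp() {
  local mb=$1 exp=0
  while [ "$mb" -gt 1 ]; do mb=$((mb / 2)); exp=$((exp + 1)); done
  echo "$exp"
}
# bar_exp 8192  -> 13 (8GB)
# bar_exp 16384 -> 14 (16GB)
```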

     

     

    21 hours ago, jonathanselye said:

    will it work if i already unbinded the gpu to unraid?[using vfio-pcie]

    I might be wrong, but I don't think you need to unbind it.

    That is what the script is for, I believe, running at Array Startup.

     

     

    21 hours ago, jonathanselye said:

    and i only have </devices> line, is that the same?

    I'm not sure what you mean; if you have </devices> then you will certainly have <devices> somewhere, or you've got bigger problems.

    Follow the instructions, and read your XML config carefully, line by line; read upwards from the </devices> line to find the opening <devices>.

     

     

    21 hours ago, jonathanselye said:

    in device manager there is a new device called "NF I2C Slave Device" when i tried updating the driver both online and latest virtio the drivers seems not available yet so i think it is working the driver just isnt available yet.

    Doing a quick google search led to this reddit post which indicates this device is an RGB controller on the Arc card.

    This is nothing to do with virtio drivers; it's something on your card. Depending on the manufacturer, download their drivers or the accompanying config application for their RGB control.

  14. 2 minutes ago, jonathanselye said:


    I see i will try your method later and will comeback for results.

    Yes it benefits greatly from rebar even in gaming, my mother board is x570 proart cpu is ryzen 5950x everything is already enabled in your instruction in bios level, because when i used proxmox i was able to utilize rebar, there are still some transcoding in the queue so i cant do it now i will comeback later with results.

    Thanks!

    Cool, another AMD Ryzen user, and damn that's a nice motherboard. I considered it myself but I went for the TUF GAMING X570-PLUS instead.

    I've been gathering evidence that AMD platform users need to jump through a few more hoops than Intel users, so definitely follow everything I did that worked for me.

     

    Also, I updated my summary comment with a link and details of the Rebar UserScript that I use to set my ReBar size, I'd recommend you check this out and do the same.

     

    Best of luck.

  15. 12 minutes ago, jonathanselye said:

    Apologies for my lack of information i actually did not apply the patch for 6.12 because i might break some things and i am asking this thread for the go signal if it is safe.

    Yeah, I wouldn't expect that you would need to apply the patch for unraid 6.12, because to my understanding it should already be in kernel 6.1.

    The patch was built on an earlier kernel 5.19.17, with kernel 6.1 additions, so that sounds like a bad idea. I also would not recommend trying to run the custom compiled kernel under unraid 6.12, I suspect you'll have a bad time.

     

    I think you do need to make sure that you have followed all the other requirements/steps I summarised in an earlier comment in this thread; that should give you a decent baseline for comparison.

     

    I mean here, as far as I can tell this is still the best summary to make this work: 

     

    Interesting that you're using an Intel Arc GPU, I've read a lot that those GPUs definitely benefit from ReBar.

    I'd say in theory it should work if you set the appropriate parameters, so check the summary linked above.

     

    Are you running an Intel or AMD CPU/Motherboard platform?

    In particular, what CPU and motherboard are you running your Arc A770 on?

     

    Let us know how you get on.

  16. Hi @Skitals I took another look at the kernel patch you linked earlier.

    As far as I can tell, this patch is present in kernel 6.1 since v6.1-rc1 which is now present in unraid 6.12.

     

    So I wanted to ask, on the surface do you know if there would be any obvious reason that this wouldn't be present, or not work in unraid 6.12?

     

    Also, I've yet to test 6.12, so I haven't personally verified any issues of missing ReBar in unraid 6.12 using kernel 6.1.

     

    @jonathanselye can you confirm and describe what you've done to test on unraid 6.12, what hardware you use, and what your exact results were?

  17. 10 minutes ago, jonathanselye said:

    actually the stable is already released but unfortunately it is not natively supported(rebar)

    Oh damn, it was released yesterday, I can see that now.

    I guess I'll get around to testing that at some point.

     

    Well, @jonathanselye can you at least tell us more about what you are doing, what you are running, and EXACTLY what you've encountered?

    So far you've made it very clear that it doesn't work, but we can't read minds, so we have literally no idea what could be wrong.

  18. 7 minutes ago, jonathanselye said:

    How to make this work with unraid 6.12? which natively has 6.1 kernel.

    I haven't tested 6.12 yet (waiting for the stable release), but my understanding is that with unraid 6.12 being native on kernel 6.1, this should already be implemented, with no need for the custom kernel.

     

    You will still need to do all the other modifications that I listed earlier for convenience, to my understanding.

     

    If anyone has tested this on the latest RC release, perhaps they can shed more light on whether that is the case?

  19. Hi guys, I reported the problems I'm having a few posts ago back in mid-February, but I'm still having issues.

    I was encountering an error code 1500 reporting that my backups are over 5.5TB, even though they are not.

     

    I seem to have noticed that it's not backing up my symlinked directories though, even though Cloudberry claims it supports symlinks. I have a few symlinked directories that get updated regularly via a backup script; the script locally copies files from remote servers and updates the dir symlink to point to the latest backup.

    In my server is a single directory called "AWS_Syncs" that I symlink other dirs under (No remote or inaccessible links) so that they can be updated and backed up transparently under a single dir.

    I have docker mapped my host /mnt to containers /storage dir, is this correct?

    Should I be mapping unraid host /mnt to container /mnt instead?

     

    When I set up my backup, I can browse the symlinked dirs, and select anything I need, which for me is everything.

     

    However, I think Cloudberry is having trouble at the time of backup, because when looking at my backups I cannot see anything being backed up from these dirs.

     

    Is there a permissions issue, even though they seem to be accessible?

    Maybe there is a set of permissions or access that Cloudberry is looking for?

     

    I've played with the "Backup Symlinks" setting, and have not had any success yet.

     

    Anyone have ideas?
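    One quick thing worth ruling out: a dangling symlink is a common reason a backup tool silently skips content. A small sketch to list symlinks that don't resolve (the path is my guess from this thread; adjust to your mapping):

```shell
#!/bin/bash
# Sketch: print symlinks under a directory whose targets don't exist.
# -type l matches symlinks; test -e follows the link, so "! -exec test -e"
# is true only for dangling links.
broken_links() {
  find "$1" -type l ! -exec test -e {} \; -print
}
# Example (path assumed from the post):
# broken_links /mnt/user/AWS_Syncs
```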

  20. Hey @N3z, maybe I can help, but it would also be good if you post your full specs, because it helps with understanding specific hardware issues.

    Stuff like what's in my signature or what others have posted here; you should especially indicate what GPU and CPU/platform you're using.

     

    Anyway, I see you're on AMD with a 4070Ti.

    Here is the checklist that I figured would be important:

    On 1/31/2023 at 2:05 AM, KptnKMan said:

    So looks like the checklist to enable this:

    - Host BIOS Enable ReBAR support

    - Host BIOS Enable 4G Decoding

    - Enable & Boot Custom Kernel syslinux configuration (near beginning of this thread)

    - Boot Unraid in UEFI Mode

    - VM must use UEFI BIOS

    - VM must have the top line of XML from <domain type='kvm'> to:

    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

    - VM must have added the following (after the </device> line, before the </domain> line):

      <qemu:commandline>
        <qemu:arg value='-fw_cfg'/>
        <qemu:arg value='opt/ovmf/X-PciMmio64Mb,string=65536'/>
      </qemu:commandline>

     

    I highlighted where you maybe need to look for issues, if you have everything else done.

    If you're actually booting the new patched kernel (I found that for AMD the new kernel is a must), and everything else is in place then you need to create a userscript to run "At Startup of Array" that does the unbind.

     

    At least that's what I do, and seems to work.
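    On the X-PciMmio64Mb value in that checklist: as I understand it, it's a size in MB, so a 64GB MMIO window comes out to 64 * 1024 = 65536. A trivial helper, for anyone adjusting it:

```shell
# Hedged helper: X-PciMmio64Mb appears to take a size in MB, so a 64GB
# window is 64 * 1024 = 65536 (the value used in the checklist above).
mmio_mb() { echo $(( $1 * 1024 )); }
# mmio_mb 64 -> 65536
```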

  21. 7 hours ago, Trozmagon said:

     

    CPU: i9-11900k

    MOBO: MSI MEG z590i Unify

    RAM: Patriot Viper 4 Blackout Series DDR4 64GB (2 x 32GB) 3600MHz

    GPU: Gigabyte RTX 3090 Turbo Edition

    PSU: Asus ROG Loki SFX-L 850W

    SSD: 2x 970 EVO Plus 2TB (Cache Drives)

    HDD Internal: 4x Seagate Barracuda 2tb 2.5"

    HDD External: 5x Seagate Barracuda 8tb 3.5"

    HDD External Caddy: Terramaster D5 Thunderbolt 3 (RocketRAID 2720)

     

    It could be worth noting that my Unraid USB boot drive is passed through from the Thunderbolt 3 dock, this allowed me to free up the internal USB controller on the Motherboard to passthrough to my Windows 11 VM. Also aside from adding some drives and upgrading my PSU recently my hardware has been the same since owning the 3080 and the 6900 XT.

     

    Hmm, I wonder if this is related to my AMD Ryzen configuration?

    I've noticed some differences when using an AMD Ryzen setup, but honestly it's been pretty great for 99% of the time I've had it.

    This kernel patch allowed me to properly get past the black EFI bootup screen and get in-VM ReBAR support.

    Before, I could only pass through my 3090 by booting Legacy BIOS on host.

     

    Posting in case it changes in future:

    CPU: Ryzen 9 5950x

    Mobo: ASUS TUF GAMING X570-PLUS (WI-FI)

    Memory: 128GB (4x32GB) Kingston Unbuffered ECC PC4-25600 DDR4-3200

    GPU: Gigabyte RTX3090 Turbo 24G

    Storage: 1x980 Pro 1TB NVME (Cache+VMs), 1x970 EVO Plus 1TB NVME (VMs), 860 EVO 256GB SSD (VMs), 840 EVO 1TB SSD (VMs), 2x WD Red 8TB (Parity), 6x WD Red 8TB (Storage).

    Networking: 10Gbit with Mellanox ConnectX-3 Dual-port NIC, 1Gbit with onboard Realtek L8200A.

  22. On 1/31/2023 at 2:05 AM, KptnKMan said:

    Looks like I figured it out, I had left out the steps to add the extra lines to my Win11 VM, which enabled 64GB ReBAR support.

     

    So looks like the checklist to enable this:

    - Host BIOS Enable ReBAR support

    - Host BIOS Enable 4G Decoding

    - Enable & Boot Custom Kernel syslinux configuration (near beginning of this thread)

    - Boot Unraid in UEFI Mode

    - VM must use UEFI BIOS

    - VM must have the top line of XML from <domain type='kvm'> to:

    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

    - VM must have added the following (after the </device> line, before the </domain> line):

      <qemu:commandline>
        <qemu:arg value='-fw_cfg'/>
        <qemu:arg value='opt/ovmf/X-PciMmio64Mb,string=65536'/>
      </qemu:commandline>

     

    After that, looks like everything worked for me as well.

    I'm just summarising this for anyone looking for a complete idea.
     

    I'll be testing performance over the next weeks as well to see if I'm seeing any improvement.

     

    This is great, exactly what I've been waiting for!

    An update from my side, I've tested this multiple times now.

    The custom ReBAR-enabled firmware is definitely necessary for me; I've tried booting from the standard kernel enough times.

    When I boot standard, booting my 3090-attached VM, I get a black screen with nothing but a blinking text cursor on the screen.

    I've tried this quite a few times now, and can confirm it.

     

    On 2/2/2023 at 6:52 PM, Trozmagon said:

     

    Just a heads up, the custom kernel wasn't necessary for me with my RTX 3090, just adding the additional xml to my VM config was enough.

    What hardware are you using?

    Can you list out your server's specs and unraid version?

     
