Soon™️ 6.12 Series


starbetrayer

Recommended Posts

4 hours ago, FlyingTexan said:

That’s the plan. Native support in Unraid for the Plex docker. I know Plex will need to update its drivers as well.

You can get Arc running under 6.0 with force_probe. This is on an A770:

 

root@computenode:~# cat /boot/config/modprobe.d/i915.conf 
options i915 force_probe=56a0
root@computenode:~# 
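
For anyone with a different Arc card: the device ID that force_probe wants can be read straight from lspci. A minimal sketch (56a0 is the A770 in this box; yours may differ):

# Find the PCI vendor:device ID of the Arc card; force_probe takes the part after 8086:
lspci -nn | grep -i vga
# e.g. 03:00.0 VGA compatible controller [0300]: Intel Corporation DG2 [Arc A770] [8086:56a0] (rev 08)

# Write it to the modprobe config on the flash drive, then reboot
echo "options i915 force_probe=56a0" > /boot/config/modprobe.d/i915.conf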

root@computenode:~# ls /dev/dri/
by-path/  card0  card1  renderD128  renderD129

root@computenode:~# lspci -vs 03:00.0
03:00.0 VGA compatible controller: Intel Corporation DG2 [Arc A770] (rev 08) (prog-if 00 [VGA controller])
        Subsystem: Intel Corporation Device 1020
        Flags: bus master, fast devsel, latency 0, IRQ 149, IOMMU group 19
        Memory at 50000000 (64-bit, non-prefetchable) [size=16M]
        Memory at 60e0000000 (64-bit, prefetchable) [size=256M]
        Expansion ROM at 51000000 [disabled] [size=2M]
        Capabilities: [40] Vendor Specific Information: Len=0c <?>
        Capabilities: [70] Express Endpoint, MSI 00
        Capabilities: [ac] MSI: Enable+ Count=1/1 Maskable+ 64bit+
        Capabilities: [d0] Power Management version 3
        Capabilities: [100] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [420] Physical Resizable BAR
        Capabilities: [400] Latency Tolerance Reporting
        Kernel driver in use: i915
        Kernel modules: i915


root@computenode:~# lspci -vs 00:02.0
00:02.0 VGA compatible controller: Intel Corporation AlderLake-S GT1 (rev 0c) (prog-if 00 [VGA controller])
        DeviceName: Onboard - Video
        Subsystem: Micro-Star International Co., Ltd. [MSI] AlderLake-S GT1
        Flags: bus master, fast devsel, latency 0, IRQ 135, IOMMU group 2
        Memory at 6118000000 (64-bit, non-prefetchable) [size=16M]
        Memory at 4000000000 (64-bit, prefetchable) [size=256M]
        I/O ports at 5000 [size=64]
        Expansion ROM at 000c0000 [virtual] [disabled] [size=128K]
        Capabilities: [40] Vendor Specific Information: Len=0c <?>
        Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
        Capabilities: [ac] MSI: Enable+ Count=1/1 Maskable+ 64bit-
        Capabilities: [d0] Power Management version 2
        Capabilities: [100] Process Address Space ID (PASID)
        Capabilities: [200] Address Translation Service (ATS)
        Capabilities: [300] Page Request Interface (PRI)
        Capabilities: [320] Single Root I/O Virtualization (SR-IOV)
        Kernel driver in use: i915
        Kernel modules: i915

root@computenode:~# cat /var/log/syslog | grep i915
Jan 16 19:20:45 computenode kernel: i915 0000:00:02.0: [drm] VT-d active for gfx access
Jan 16 19:20:45 computenode kernel: i915 0000:00:02.0: [drm] Using Transparent Hugepages
Jan 16 19:20:45 computenode kernel: i915 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=io+mem
Jan 16 19:20:45 computenode kernel: i915 0000:00:02.0: [drm] Finished loading DMC firmware i915/adls_dmc_ver2_01.bin (v2.1)
Jan 16 19:20:45 computenode kernel: i915 0000:00:02.0: [drm] GuC firmware i915/tgl_guc_70.1.1.bin version 70.1
Jan 16 19:20:45 computenode kernel: i915 0000:00:02.0: [drm] HuC firmware i915/tgl_huc_7.9.3.bin version 7.9
Jan 16 19:20:45 computenode kernel: i915 0000:00:02.0: [drm] HuC authenticated
Jan 16 19:20:45 computenode kernel: i915 0000:00:02.0: [drm] GuC submission disabled
Jan 16 19:20:45 computenode kernel: i915 0000:00:02.0: [drm] GuC SLPC disabled
Jan 16 19:20:45 computenode kernel: [drm] Initialized i915 1.6.0 20201103 for 0000:00:02.0 on minor 0
Jan 16 19:20:45 computenode kernel: i915 0000:03:00.0: [drm] Incompatible option enable_guc=3 - HuC is not supported!
Jan 16 19:20:45 computenode kernel: i915 0000:00:02.0: [drm] Cannot find any crtc or sizes
Jan 16 19:20:45 computenode kernel: i915 0000:03:00.0: [drm] VT-d active for gfx access
Jan 16 19:20:45 computenode kernel: i915 0000:03:00.0: vgaarb: deactivate vga console
Jan 16 19:20:45 computenode kernel: i915 0000:03:00.0: BAR 0: releasing [mem 0x50000000-0x50ffffff 64bit]
Jan 16 19:20:45 computenode kernel: i915 0000:03:00.0: BAR 2: releasing [mem 0x60e0000000-0x60efffffff 64bit pref]
Jan 16 19:20:45 computenode kernel: i915 0000:03:00.0: BAR 2: no space for [mem size 0x400000000 64bit pref]
Jan 16 19:20:45 computenode kernel: i915 0000:03:00.0: BAR 2: failed to assign [mem size 0x400000000 64bit pref]
Jan 16 19:20:45 computenode kernel: i915 0000:03:00.0: BAR 0: assigned [mem 0x50000000-0x50ffffff 64bit]
Jan 16 19:20:45 computenode kernel: i915 0000:03:00.0: [drm] Failed to resize BAR2 to 16384M (-ENOSPC)
Jan 16 19:20:45 computenode kernel: i915 0000:03:00.0: BAR 2: assigned [mem 0x60e0000000-0x60efffffff 64bit pref]
Jan 16 19:20:45 computenode kernel: i915 0000:03:00.0: [drm] Local memory IO size: 0x0000000010000000
Jan 16 19:20:45 computenode kernel: i915 0000:03:00.0: [drm] Local memory available: 0x00000003fa000000
Jan 16 19:20:45 computenode kernel: i915 0000:03:00.0: [drm] Using a reduced BAR size of 256MiB. Consider enabling 'Resizable BAR' or similar, if available in the BIOS.
Jan 16 19:20:45 computenode kernel: i915 0000:03:00.0: [drm] Finished loading DMC firmware i915/dg2_dmc_ver2_06.bin (v2.6)
Jan 16 19:20:45 computenode kernel: i915 0000:00:02.0: [drm] Cannot find any crtc or sizes
Jan 16 19:20:45 computenode kernel: i915 0000:00:02.0: [drm] Cannot find any crtc or sizes
Jan 16 19:20:45 computenode kernel: i915 0000:03:00.0: [drm] GuC firmware i915/dg2_guc_70.1.2.bin version 70.1
Jan 16 19:20:45 computenode kernel: i915 0000:03:00.0: [drm] GuC submission enabled
Jan 16 19:20:45 computenode kernel: i915 0000:03:00.0: [drm] GuC SLPC enabled
Jan 16 19:20:45 computenode kernel: i915 0000:03:00.0: [drm] GuC RC: enabled
Jan 16 19:20:45 computenode kernel: [drm] Initialized i915 1.6.0 20201103 for 0000:03:00.0 on minor 1
Jan 16 19:20:45 computenode kernel: fbcon: i915drmfb (fb0) is primary device
Jan 16 19:20:45 computenode kernel: i915 0000:03:00.0: [drm] fb0: i915drmfb frame buffer device
Jan 16 19:20:49 computenode kernel: i915 0000:00:02.0: [drm] *ERROR* Unclaimed access detected prior to suspending
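
One thing worth pulling out of that log: the card came up with a reduced 256MiB BAR ("Failed to resize BAR2 to 16384M (-ENOSPC)"). If the board and BIOS support it, enabling Resizable BAR (typically alongside Above 4G Decoding) should let i915 map the full 16G of VRAM. A quick check after changing the BIOS setting; the "after" size is what I'd expect, not verified output:

# Check the prefetchable BAR size on the Arc card
lspci -vs 03:00.0 | grep prefetchable
# before: Memory at 60e0000000 (64-bit, prefetchable) [size=256M]
# after enabling Resizable BAR, expect something closer to [size=16G]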

 

  • Like 2
  • Thanks 1
12 hours ago, Pri said:

Ya know, since you're thinking about changing the "unRAID array" to be a "Primary Pool", perhaps now would be a good time to think about getting rid of the concept of primary and secondary systems entirely.

 

And instead just make it all pools, by which I mean users can simply make a pool, name it, and select which storage devices are a part of it and what kind of disk management system they want that pool to use, be it the unRAID array, ZFS, or BTRFS, and the kind of RAID mode, including XFS if they just want a single-device pool without RAID.

 

Then with the share system you can target any pool and select what the mover does, being able to move from one pool to any other pool, and also select which pool should act as a cache for another pool. All generalised and standardised.

 

I think this will be simpler for new users to grasp, especially since a few updates ago we gained the ability to make multiple cache pools, and now we're gaining ZFS. You're also thinking about renaming the unRAID array to "Primary Pool", and I assume that means other kinds of pools will be called Secondary Pools, which will be a little confusing for someone who only wants ZFS and doesn't want to use unRAID arrays.

 

I also think this is a great way to introduce the concept of multiple unRAID arrays for users who want that. Since it's all pools, you can just tell someone: sure, make two pools and set both their modes to unRAID Array. Pretty simple in my mind.

 

This is exactly where we are headed! Can I recruit you to rewrite the wiki? (kidding; actually only somewhat kidding :))

It won't all happen in the 6.12 release.

  • Like 2
  • Haha 2
8 hours ago, orhaN_utanG said:

Hello Guys, noob again.

 

I'm currently using TrueNAS Scale with 4 HDDs. I guess there is no way to switch to Unraid without formatting the drives and starting over, correct? Just want to double-check before I begin to copy/save 20 TB of data.

 

 

We plan to support this.  The difference is that Unraid utilizes partition 1 on a storage device as the ZFS data partition, whereas TrueNAS sets up a small partition 1, which can function as a swap volume, and sets partition 2 as the ZFS data partition.  There are a couple of ways we can go about handling this... t.b.d.
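
Until that support lands, you can see the layout difference yourself on a disk that TrueNAS formatted. A minimal sketch, assuming the disk shows up as /dev/sdX:

# Print the partition table of a TrueNAS-created ZFS member disk
fdisk -l /dev/sdX
# TrueNAS layout: small partition 1 (swap) followed by partition 2 (zfs data)
# Unraid layout:  a single partition 1 used as the zfs data partition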

18 minutes ago, limetech said:

There are a couple of ways we can go about handling this... t.b.d.

Since we are discussing massive changes in array handling, I'd like to submit a harebrained idea: use the same address space that the preclear signature occupies (or something similar) to put a couple of kilobytes of ID data that would allow Unraid to recognize drives that should participate in the classic unRAID parity array. If there is enough space, it could contain ID hashes of the rest of the drives in that set, so Unraid could easily determine which drives should be in which slots for a pool to have valid parity.

 

That way a fresh Unraid install could prepopulate any detected unRAID pools. Maybe it could even handle other pool types this way too.

 

It would be really nice to be able to download a fresh Unraid install and have it instantly recognize all the drives.

  • Like 4

As far as I know, Unraid doesn't have an option to tell it to keep a share on a specific drive and, only when that drive is full, write to the next drive or a specific other drive.

I know it's possible to fill up disks one after the other, and it's possible to adjust the splitting, but easily telling a share to write data only to a specific drive is not possible as far as I know, and that would be super helpful.

 

So for example I could have my movies on one drive, my series on another, my ISOs on a third drive, and my Nextcloud on a fourth. Now if I add data to Nextcloud, this data lands on the first drive if I haven't set the correct split and fill-up settings, and even then, if the Nextcloud drive is full, the files go randomly somewhere else, or nowhere.

 

Isn't one of the main concepts of Unraid to keep disks spun down? Then wouldn't it make sense to do it the way I explained, so only the Nextcloud disk gets spun up, and not a bunch of other disks, just because there are a few text documents lying on them?

 

I might be wrong, but this would be a great addition to Unraid.


Now that RAID is coming to Unraid, and the way it's heading is toward different tiers/levels of array, why not call it "Arraid" or something, lol, JK. Now that ZFS and multiple arrays have come, and the drive limitation may be removed, a lot of opportunity has opened up for the company. It is now possible to partner with hardware sellers, e.g. 45Drives x Unraid; who knows, in the future we could see a Storinator with an Unraid option on their site.

  • Haha 1
4 hours ago, Joly0 said:

I know of that, but there you can only include/exclude disks; they still get filled according to the fill-up setting and the split setting, and if that one disk is full, what happens next?

Then the share shows up as full, because it is...

If you set a given share to go to 2 disks with the fill-up strategy, then it'll only write to one disk until it's full, then only to the 2nd, etc... there isn't really anything more that can be offered?
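
For reference, that include-plus-fill-up combination lives in the share's config file on the flash drive. A hedged sketch of the relevant keys (names as I recall them; verify against your own files under /boot/config/shares/):

# /boot/config/shares/nextcloud.cfg (illustrative values)
shareInclude="disk4"      # only allocate this share's files on disk4
shareAllocator="fillup"   # fill the included disks in order instead of spreading writes
shareSplitLevel=""        # no split-level restriction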

Edited by Kilrah

We just moved to TrueNAS Core (virtualised on Unraid) in September to support our bandwidth needs... Looking like it won't be long before we move back (Core sucks for the unfamiliar).

 

As a side note, having support for 30+ drives would be nice for us. Our ZFS pool is 24 drives and we have a JBOD case to add another 36 drives over the next 12 months. We can manage otherwise though.

  • Like 5
On 1/19/2023 at 12:13 AM, Congles said:

We just moved to TrueNAS Core (virtualised on Unraid) in September to support our bandwidth needs... Looking like it won't be long before we move back (Core sucks for the unfamiliar).

 

As a side note, having support for 30+ drives would be nice for us. Our ZFS pool is 24 drives and we have a JBOD case to add another 36 drives over the next 12 months. We can manage otherwise though.

From the look of it, they don't want to support 30+ drives.

5 hours ago, aniel said:

From the look of it, they don't want to support 30+ drives.

Yeah, they keep doing surveys, and the VAST majority of their customers don't use anything close to 30 drives. Although some crazy individuals run Unraid in Docker containers on an Unraid host. I think unless you're running something super crazy, 30 drives is more than enough, especially with the HDD manufacturers getting ready to release 30TB drives and scaling up to 120TB by 2030-ish. Also, unless they add more parity drives, running even 28 drives is kinda hairy.

Edited by TheIlluminate
On 1/12/2023 at 12:19 PM, SimonF said:

Is this for transcoding rather than passthru?

Out of curiosity, is there a reason you're asking? Personally, I'm going to run 2 A380s. For the short term, both are going to be loaded into Tdarr to roll through 50TB of movies, TV shows, and anime. Long term, I'm going to split them and use one for Tdarr to manage new incoming media and the other for passthrough or Plex. Depends if I see a benefit for Plex over my 11600K's iGPU.

54 minutes ago, TheIlluminate said:

Out of curiosity, is there a reason you're asking? Personally, I'm going to run 2 A380s. For the short term, both are going to be loaded into Tdarr to roll through 50TB of movies, TV shows, and anime. Long term, I'm going to split them and use one for Tdarr to manage new incoming media and the other for passthrough or Plex. Depends if I see a benefit for Plex over my 11600K's iGPU.

It was to understand whether kernel support was needed. You can pass through an Arc GPU, and that currently works on 6.11.5.
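
For anyone taking the passthrough route on 6.11.5: binding the card to vfio-pci is done from Tools > System Devices in the webGUI, which, as I recall, writes an entry like the one below to the flash drive. Treat the exact format as illustrative, not authoritative:

# /boot/config/vfio-pci.cfg (illustrative; normally generated by Tools > System Devices)
BIND=0000:03:00.0|8086:56a0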

4 minutes ago, usr.local said:

Nobody looks at that hidden, sneaky-ass fine print. All they see is "Unlimited attached storage devices". If they are going to limit it to 30 drives, then they should clearly state it in the purchase agreement. @limetech

Well they could do this.

[attached image]

But then what about all the other license details? They are important too.
The fact is, you should always read the license details before purchasing anything. If you don't do that, then that's not Unraid's fault.

And it's not like a 100-page wall of text you have to dig through, like some companies provide. It's very clear and easy to understand.

  • Upvote 2
22 minutes ago, usr.local said:

If you go to the website to purchase Unraid:

 

https://unraid.net/pricing

 

It only tells you "Unlimited attached storage devices".

 

What you are linking to may give you more details, yet the web page that comes up when you click on Pricing only gives you the very limited information of "Unlimited attached storage devices". It is a very deceptive sales pitch. @limetech

I agree, and the fact that people are requesting this on their "next version" release post gives you something to talk about.

Edited by aniel
  • Upvote 2

Well, you can have up to 30 pools, each with a large number of drives, so if you use that technique you can attach on the order of 900 drives with current Unraid releases. With support for ZFS in pools, users may be inclined to do that when they want large numbers of drives.

1 minute ago, aniel said:

so you can create a pool with no data protection?

btrfs already does protected pools, and ZFS is precisely about allowing more options for that; it was also already confirmed that you'd be able to have multiple pools equivalent to the current array in the future...

Edited by Kilrah
