kubed_zero

Community Developer
Everything posted by kubed_zero

  1. Thanks for checking! No problem, feel free to use it.
  2. https://slackware.uk/search?p=%2F&q=glibc-2.37
     https://slackware.uk/cumulative/slackware64-current/slackware64/l/glibc-2.37-x86_64-1.txz
     https://slackware.uk/cumulative/slackware64-current/slackware64/l/glibc-2.37-x86_64-2.txz
     https://slackware.uk/cumulative/slackware64-current/slackware64/l/glibc-2.37-x86_64-3.txz
     Behold! With some digging through that website, old versions can be found.
  3. I think I was able to find everything I needed on https://slackware.uk/slackware/, and some general googling turned up the older versions. I've also uploaded some of the older packages as part of this post:
     glibc-2.37-x86_64-2.txz
     openssl-3.1.1-x86_64-1.txz
     Let me know if you need anything else!
  4. @gombihu see my post from above. Its instructions are working for me right now on 6.12.6. You missed a step with the fruit file. Good luck!
  5. I just finished building Python 3.12.1 using similar instructions to those above. I now run

```
find usr/lib*/python* -name '*.so' | xargs strip --strip-unneeded
```

instead of

```
find . -print0 | xargs -0 file | grep -e "executable" -e "shared object" | grep ELF | cut -f 1 -d : | xargs strip --strip-unneeded 2> /dev/null || true
```

I used a longer list of packages installed at compile time, some of them not needed for Python but needed for BorgBackup:

```
Python-3.12.1.tar.xz
acl-2.3.1-x86_64-1.txz
binutils-2.41-x86_64-1.txz
bzip2-1.0.8-x86_64-3.txz
expat-2.5.0-x86_64-1.txz
fuse3-3.15.0-x86_64-1.txz
gc-8.2.4-x86_64-1.txz
gcc-13.2.0-x86_64-1.txz
gcc-g++-13.2.0-x86_64-1.txz
gdbm-1.23-x86_64-1.txz
git-2.43.0-x86_64-1.txz
glibc-2.37-x86_64-2.txz
guile-3.0.9-x86_64-1.txz
kernel-headers-6.1.64-x86-1.txz
libffi-3.4.4-x86_64-1.txz
libmpc-1.3.1-x86_64-1.txz
libzip-1.10.1-x86_64-1.txz
lzlib-1.13-x86_64-1.txz
make-4.4.1-x86_64-1.txz
openssl-3.1.1-x86_64-1.txz
openssl-solibs-3.1.1-x86_64-1.txz
openssl11-1.1.1w-x86_64-1.txz
openssl11-solibs-1.1.1w-x86_64-1.txz
pkg-config-0.29.2-x86_64-4.txz
readline-8.2.001-x86_64-1.txz
xz-5.4.3-x86_64-1.txz
zlib-1.2.13-x86_64-1.txz
```

I found that using newer versions of OpenSSL and glibc allowed for successful compilation, but caused failures when executing Python, with errors such as "python3: /lib64/libm.so.6: version `GLIBC_2.38' not found (required by python3)". Sure enough, Unraid 6.12.6 has only "/lib64/libm-2.37.so".

I've attached the compiled python3-3.12.1 TXZ file, as well as the compiled pyfuse3-3.3.0 and borgbackup-1.2.7 WHL files that can be installed with Pip and are not otherwise available for Unraid. As far as I've tested in my use cases, they work as expected.
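The `GLIBC_2.38' not found` failure above is a version mismatch: the binary was compiled against a newer glibc than the one shipped on the target Unraid release. Below is a minimal pre-flight sketch of that comparison; the `glibc_ok` helper is hypothetical (my own invention, not part of any package above) and relies only on coreutils `sort -V` for version ordering:

```shell
# Hypothetical helper: succeed only when the glibc version a binary requires
# is no newer than the glibc version available on the target system.
# required <= available exactly when `sort -V` puts required first (or equal).
glibc_ok() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# The Unraid 6.12.6 case: binary wants 2.38, system ships 2.37
glibc_ok 2.38 2.37 || echo "target glibc too old"
# Matching versions are fine
glibc_ok 2.37 2.37 && echo "target glibc is new enough"
```

Running a check like this before copying a freshly compiled TXZ to another box would have caught the mismatch before the runtime error did.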
Below are my notes on the Python Wheel compilation for borgbackup and pyfuse3:

## 2023-12-10 Pyfuse3 Update

- Installing pyfuse3 before BorgBackup, as it's a dependency of Borg
- Installing some Slackware packages first:
  ```
  binutils-2.41-x86_64-1.txz
  glibc-2.37-x86_64-2.txz
  fuse3-3.16.2-x86_64-1.txz
  kernel-headers-6.1.64-x86-1.txz
  gcc-13.2.0-x86_64-1.txz
  pkg-config-0.29.2-x86_64-4.txz
  ```
- Then run `pip3 install pyfuse3`, which builds 3.3.0
- It installs a few pip3 packages as dependencies:
  ```
  trio-0.23.1-py3-none-any.whl
  attrs-23.1.0-py3-none-any.whl
  sortedcontainers-2.4.0-py2.py3-none-any.whl
  idna-3.6-py3-none-any.whl
  outcome-1.3.0.post0-py2.py3-none-any.whl
  sniffio-1.3.0-py3-none-any.whl
  Successfully installed attrs-23.1.0 idna-3.6 outcome-1.3.0.post0 pyfuse3-3.3.0 sniffio-1.3.0 sortedcontainers-2.4.0 trio-0.23.1
  ```
- Permanently fetch all these packages (other than pyfuse3) with `pip3 download attrs==23.1.0 idna==3.6 outcome==1.3.0.post0 sniffio==1.3.0 sortedcontainers==2.4.0 trio==0.23.1` and put them in `/boot/python_wheels/` for offline installation at boot time
- Copy the `pyfuse3` wheel file from the directory stated in the build logs: `cp /root/.cache/pip/wheels/b0/c1/b2/bd1f9969742c3b690d74ae13233287eec544c5a9135497443e/pyfuse3-3.3.0-cp312-cp312-linux_x86_64.whl /boot/python_wheels/`
- This would have appeared as something similar to `Created wheel for pyfuse3: filename=pyfuse3-3.3.0-cp312-cp312-linux_x86_64.whl size=1283409 sha256=8471a14517ee73366b6b7fd7dc2921c57327830f0bb2d58a4569abf0e2ca50ad Stored in directory: /root/.cache/pip/wheels/b0/c1/b2/bd1f9969742c3b690d74ae13233287eec544c5a9135497443e`
- Note that renaming needs to follow some strict guidelines (https://peps.python.org/pep-0491/#file-name-convention), otherwise Pip will error out with responses such as `ERROR: pyfuse3-3.3.0-cp312-cp312-linux_x86_64-kubed20231210.whl is not a valid wheel filename` or `ERROR: pyfuse3-3.3.0-cp312-cp312-linux_x86_64_kubed20231210.whl is not a supported wheel on this platform`
- Add a line to the `/boot/config/go` file to automatically install `pyfuse3` at boot, or just run it ad hoc to confirm it works: `/usr/bin/pip3 install /boot/python_wheels/pyfuse3* --no-index --find-links file:///boot/python_wheels`

## 2023-12-10 Borg 1.2.7 Update

- OK, I'm now running Python 3.12.1 as compiled above
- I also have pyfuse3 installed in Pip in preparation
- (Sidenote) https://forums.unraid.net/topic/129200-plug-in-nerdtools/?do=findComment&comment=1291205 are my old instructions for compiling Borg
- (Sidenote) there's now a SlackBuild for installing Borg on Slackware! It was updated in 2023Q3, which is more recent than the last time I updated this: https://slackbuilds.org/slackbuilds/15.0/system/borgbackup/borgbackup.SlackBuild
- Following https://borgbackup.readthedocs.io/en/stable/installation.html#pip-installation generally
- First it notes to follow https://borgbackup.readthedocs.io/en/stable/installation.html#source-install and get some Slackware dependencies installed
- Intentionally skipping the installation of `libxxhash`, `libzstd`, and `liblz4`, as the desire is to use the bundled code instead of the system-provided libraries
- The instructions call for `acl` and `pkg-config` only, but the others are needed based on trial and error:
  ```
  acl-2.3.1-x86_64-1.txz
  glibc-2.37-x86_64-2.txz
  binutils-2.41-x86_64-1.txz
  kernel-headers-6.1.64-x86-1.txz
  gcc-13.2.0-x86_64-1.txz
  pkg-config-0.29.2-x86_64-4.txz
  ```
- Then it asks to install some Pip dependencies JUST for the build process: `wheel`, `setuptools`, `pkgconfig`
- This can be done with `pip3 download pkgconfig setuptools wheel && pip3 install *.whl`
  ```
  pkgconfig-1.5.5-py3-none-any.whl
  wheel-0.42.0-py3-none-any.whl
  setuptools-69.0.2-py3-none-any.whl
  ```
- Now run `pip3 install "borgbackup[pyfuse3]"` to install borgbackup and build the wheel. Note that this adds `pyfuse3` integration; it can be skipped by running the simplified install command `pip3 install borgbackup`
- It installs a few pip3 packages as dependencies:
  ```
  msgpack-1.0.7-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  packaging-23.2-py3-none-any.whl
  Successfully installed borgbackup-1.2.7 msgpack-1.0.7 packaging-23.2
  ```
- Permanently fetch these packages (other than borgbackup) with `pip3 download msgpack==1.0.7 packaging==23.2` and put them in `/boot/python_wheels/` for offline installation at boot time
- Copy the `borgbackup` wheel file from the directory stated in the build logs: `cp /root/.cache/pip/wheels/64/90/02/c60bc19558d5e1e7c5ed13041bc27d25d41518ddfdb2def852/borgbackup-1.2.7-cp312-cp312-linux_x86_64.whl /boot/python_wheels/`
- This would have appeared as something similar to `Created wheel for borgbackup: filename=borgbackup-1.2.7-cp312-cp312-linux_x86_64.whl size=6248226 sha256=830ec3fbcc5c74922f7e269e8378aa08052c8c9cec51f407c53166d989d62e7c Stored in directory: /root/.cache/pip/wheels/64/90/02/c60bc19558d5e1e7c5ed13041bc27d25d41518ddfdb2def852`
- Note that renaming needs to follow some strict guidelines (https://peps.python.org/pep-0491/#file-name-convention), otherwise Pip will error out
- Add a line to the `/boot/config/go` file to automatically install `borgbackup` at boot, or just run it ad hoc to confirm it works: `/usr/bin/pip3 install /boot/python_wheels/borgbackup* --no-index --find-links file:///boot/python_wheels`
- The final list of wheel files ready to install at boot time is:
  ```
  attrs-23.1.0-py3-none-any.whl
  borgbackup-1.2.7-cp312-cp312-linux_x86_64.whl
  idna-3.6-py3-none-any.whl
  msgpack-1.0.7-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  outcome-1.3.0.post0-py2.py3-none-any.whl
  packaging-23.2-py3-none-any.whl
  pyfuse3-3.3.0-cp312-cp312-linux_x86_64.whl
  sniffio-1.3.0-py3-none-any.whl
  sortedcontainers-2.4.0-py2.py3-none-any.whl
  trio-0.23.1-py3-none-any.whl
  ```

Attachments:
pyfuse3-3.3.0-cp312-cp312-linux_x86_64.whl
python3-3.12.1-x86_64-1-kubed20231210.txz
borgbackup-1.2.7-cp312-cp312-linux_x86_64.whl
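Since the PEP 491 renaming rules above bit me twice, here is a rough sanity check one could run before copying a renamed wheel into `/boot/python_wheels/`. The `valid_wheel` helper and its regex are my own simplification of the PEP's `name-version[-build]-python-abi-platform.whl` convention, not anything pip itself ships:

```shell
# Hypothetical helper: rough check of a wheel filename against the PEP 491
# convention name-version[-build]-pythontag-abitag-platformtag.whl.
# The optional build tag must start with a digit, which is why appending
# "-kubed20231210" after the platform tag produced "not a valid wheel filename".
valid_wheel() {
  echo "$1" | grep -Eq '^[A-Za-z0-9_.]+-[A-Za-z0-9_.!+]+(-[0-9][A-Za-z0-9_.]*)?-[A-Za-z0-9_.]+-[A-Za-z0-9_.]+-[A-Za-z0-9_.]+\.whl$'
}

valid_wheel pyfuse3-3.3.0-cp312-cp312-linux_x86_64.whl && echo "ok to install"
valid_wheel pyfuse3-3.3.0-cp312-cp312-linux_x86_64-kubed20231210.whl || echo "pip will reject this"
```

Per the PEP, a build tag such as `-1kubed20231210` inserted directly after the version (it must start with a digit) is the compliant way to tag a custom build.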
  6. There's been no activity on the GitHub repository since I posted my PR 3 months ago https://github.com/UnRAIDES/unRAID-NerdTools/pull/84. I suspect the maintainers UnRAIDES and/or EUGENI_CAT have abandoned this effort (August 24 is the last date EUGENI_CAT logged into this forum)
  7. I'm not aware of any reason for it not to be added. The repo owner seems to be MIA: not responding to the PR, not updating the repo, not commenting here. I have notes in my PR for installing the necessary compiling tools to compile on Unraid: https://github.com/UnRAIDES/unRAID-NerdTools/pull/84 I also made an application I call DiffLens, which uses Blake3 hashes instead of MD5 hashes to do file integrity validations on Unraid shares.
  8. I can't say for certain; this is not a scenario I can test, as another VM on the machine is providing me network access to Unraid. I want to emphasize, though, that in both of these cases Unraid ran fine as a VM for 3-5 years, and it's only in the past few months that I've been seeing this.
  9. I tried searching around but couldn't find much on this. As long as I don't reboot Unraid, it won't have parity errors. Most recently, I ran the parity check three times in a row without rebooting, and it reported 0 sync errors each time. I then rebooted (safely, stopping the array and then rebooting) and started another parity check, and now there are errors. I see a consistent 1025 errors on this particular Unraid box whenever errors do pop up, which is suspicious. Looking at the parity check history, there are only ever errors after a reboot.

Parity History, showing the last three runs had zero errors: I then immediately rebooted and started another parity check, taking this screenshot:

This is a Supermicro motherboard with ECC RAM. Unraid is running as an ESXi 6.7 VM, with the Intel SATA controller passed through to the VM. I have a second Unraid server with the same setup (albeit newer hardware and newer ESXi), and it does something similar: 0 errors on repeated Parity Check operations, but the second I reboot and try to run a parity check, it'll start finding errors. Both Unraid systems have been running as VMs for a few years, and did not always have this issue. I've tried running New Permissions just in case something wacky happened to some of the files, but that did not help.

Diagnostics from both systems attached. The parity check errors can be seen in one of them. I can grab new/better diagnostics later if need be. I'm looking for help in troubleshooting next steps, as this leaves me less confident in restoring valid data should a drive fail.

I did find this blog post, https://blog.insanegenius.com/2020/01/10/unraid-repeat-parity-errors-on-reboot/, which has the same errors I did ("Jan 3 10:03:07 Server-2 kernel: md: recovery thread: P corrected, sector=1962934168"), but I don't think it's relevant in this case, as I'm just using the SATA ports directly on the motherboards, without any LSI or SAS/HBA cards.

diagnostics.zip
  10. I have Time Machine working on Unraid 6.12.3 and macOS Ventura 13.5.1, and wanted to make sure the historical threads were updated with what my solution ended up being. Adding to both the SMB Extras global settings AND the smb-fruit settings ended up being necessary.
  11. I have Time Machine fully working on Unraid 6.12.3 and macOS Ventura 13.5.1, and wanted to make sure the historical threads were updated with what my solution ended up being. Adding to both the SMB Extras global settings AND the smb-fruit settings ended up being necessary.
  12. I wanted to share that I too have mostly figured out the Samba macOS nuance. I'm on Unraid 6.12.3 with macOS Ventura 13.5.1.

To start, setting `fruit:metadata = stream` in the SMB Extras in the Unraid UI was the single biggest contributor to getting things working. Here's exactly what I have, in its entirety:

```
[Global]
fruit:metadata = stream
```

Note that I don't use Unassigned Devices, which I think would add to these lines. After adding this and stopping/starting the array, pre-existing Time Machine backups were NOT working reliably, so I also had to create new Time Machine backups from scratch. I kept the old sparsebundles around just in case. Once new initial backups were made successfully, one of my MacBooks was able to reliably back up on a daily cadence. It's been running this way for a couple of months.

Meanwhile, one of my other MacBooks refused to work well with Time Machine, making one successful backup every few weeks, contingent on a recent Unraid reboot. I couldn't deal with this, so I factory reset it (reinstalling macOS) and created an additional new Time Machine backup on Unraid. Then it worked flawlessly.

Then one of my MacBooks died, so I needed to restore data from Time Machine. I first tried to connect to Unraid and mount the sparsebundle through Finder, but it would time out, beachball, and overall never end up working. I was, however, able to get it mounted and accessible through the Terminal/CLI using the command `hdiutil attach /Volumes/path/to/sparsebundle`, and with that, access the list of Time Machine snapshots and the files I wanted to recover. Then I tried to use Apple's Migration Assistant to attempt a full restore from a Time Machine backup. I was able to connect to the Unraid share and it was able to list the sparsebundles, but it would get stuck at "Loading Backup..." indefinitely.
I moved some of the other computers' sparsebundles out of the share so it could focus on just the one sparsebundle I wanted, but even after waiting 24 hours, it would still say that it was loading backups. Looking at the Open Files plugin's tab in Unraid, I would see it reading one band file at a time. After enough of this, I tried to access a different sparsebundle that had only two backups, instead of months of backups, and "Loading Backups..." went away within 10 minutes and I was able to proceed with the Time Machine restoration, albeit slowly, and not with the data I wanted.

This did clue me in to something, though. Using `find /path/to/sparsebundle/bands/ -type f | wc -l` to get the file count inside the sparsebundle, the one that made it through Migration Assistant was only 111 files, and the one that stalled for 24h was over 9000 files.

I then went back to the Unraid SMB settings and tried to fiddle around a bit more. I found, as others did, that changing the following settings in smb-fruit.conf caused big improvements. The defaults for these settings are `yes`, so I changed them to `no`:

```
readdir_attr:aapl_rsize = no
readdir_attr:aapl_finder_info = no
readdir_attr:aapl_max_access = no
```

As the Samba vfs_fruit man page (https://www.samba.org/samba/docs/current/man-html/vfs_fruit.8.html) notes, `readdir_attr:aapl_max_access = no` is probably the most significant of these, as the setting is described: "Return the user's effective maximum permissions in SMB2 FIND responses. This is an expensive computation. Enabled by default." My suspicion is that the thousands of files that make up a sparsebundle end up getting bottlenecked when read through Samba, causing Migration Assistant to fail.
After adding these lines to `/etc/samba/smb-fruit.conf`, copying that updated file over to `/boot/config/smb-fruit.conf`, and stopping and starting the array, I confirmed the settings were applied by looking at the output of `testparm -s`:

```
[global]
~~~shortened~~~
fruit:metadata = stream
fruit:nfs_aces = No
~~~shortened~~~

[TimeMachine]
path = /mnt/user/TimeMachine
valid users = backup
vfs objects = catia fruit streams_xattr
write list = backup
fruit:time machine max size = 1250000M
fruit:time machine = yes
readdir_attr:aapl_max_access = no
readdir_attr:aapl_finder_info = no
readdir_attr:aapl_rsize = no
fruit:encoding = native
```

Now that the new settings were in place, Migration Assistant got through the "Loading Backups" stage within a minute or two, and I was able to successfully and fully restore the old backup sparsebundle with thousands of files.

I know there's some nuance around Apple/fruit settings depending on the first device to connect to Samba, so this entire experiment took place with only Macs connecting to Unraid. I did not yet repeat the experiment with Windows connecting first or in parallel, but I hope the behavior is the same, as I cannot guarantee Macs will always connect before Windows computers on my network.

Anyway, I wanted to share, as I avoided updating Unraid 6.9.2 for literal years to keep a working Time Machine backup. I then jumped for joy at the macOS improvements forum post a year ago just to find it didn't help in any way, and was again excited to update to 6.12, just to find it STILL didn't work reliably with default settings. Very disappointing, LimeTech. And a huge thanks to the folks in these threads who have shared their updates and what has or has not worked for them. Let's keep that tradition going, as it's clear we are on our own here.

Some Time Machine related posts from over the years; I'll make update posts in each directing here.

TLDR: Working Time Machine integration. Adding `fruit:metadata = stream` to the global settings, and then `readdir_attr:aapl_max_access = no`, `readdir_attr:aapl_finder_info = no`, and `readdir_attr:aapl_rsize = no` to the smb-fruit settings, allowed me to run Time Machine backups AND restore from or mount them using Finder and Migration Assistant.
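For anyone skimming, here is a condensed sketch of the two configuration changes described in this post, with brief comments; the share name and paths are from my setup, so adapt them to yours:

```
# SMB Extras (Settings > SMB in the Unraid UI), global section:
[Global]
fruit:metadata = stream

# Added to the share's section in /boot/config/smb-fruit.conf
# (the default for all three is yes):
readdir_attr:aapl_rsize = no
readdir_attr:aapl_finder_info = no
readdir_attr:aapl_max_access = no
```

Remember to stop and start the array after changing either file, and verify the result with `testparm -s`.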
  13. cross-linking to the other feature requests asking for the same thing
  14. cross-linking to the other feature requests asking for the same thing
  15. cross-linking to the other feature requests asking for the same thing
  16. cross-linking to the other feature requests asking for the same thing
  17. Right now, any disk's Identity page allows defining a YYYYMMDD for purchase date and a YYYYMMDD for manufacture date. There is a category for Warranty Period, but it is preconfigured to only allow from 6-50 months. There is no option to add a YYYYMMDD, and no option to add a custom number of months. This should be a straightforward UI change to allow for more than just the pre-configured dropdown. This would be useful because Seagate, Western Digital, and I'm sure other HDD manufacturers all have warranty pages that state the exact end date of warranty periods. For those buying used drives or RMA'd drives, warranty expiry does not always align nicely with manufacture date or purchase date. Thanks for considering this change!
  18. And as a side note, I was able to get Borg and PyFuse3 Pip wheels compiled and functional with no additional system packages needed. They're attached here:

borgbackup-1.2.4-cp311-cp311-linux_x86_64.whl
pyfuse3-3.2.3-cp311-cp311-linux_x86_64.whl

They can be installed with "pip3 install /path/to/wheel" and can be used immediately after. Note that pyfuse3 is only needed if a Borg repository needs to be mounted; otherwise it can be skipped. This falls out of the purview of NerdTools, but I figure it would be of interest to the group here, since the Borgbackup system package is the only other way of getting it working on Unraid. This is easier and more reliable, in my opinion, as Pip manages the dependency installation, and upgrading can be done by replacing only the Wheel file.

Here are my development notes:

## Compiling Borg 1.2.4 on Unraid 6.12.3

- Starting with a fresh reboot, plus the OpenSSL and Python 3.11.4 package installations I figured out earlier
- `pip3 install borgbackup`
- Installing the following packages as precursors:
  ```
  pkg-config-0.29.2-x86_64-4.txz
  gcc-13.2.0-x86_64-1.txz
  glibc-2.37-x86_64-2.txz
  kernel-headers-6.1.42-x86-1.txz
  binutils-2.41-x86_64-1.txz
  acl-2.3.1-x86_64-1.txz
  ```
- python:
  ```
  msgpack-1.0.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  packaging-23.1-py3-none-any.whl
  pip 23.2.1
  setuptools 65.5.0
  ```
- Errors seen and their package fix if there was an issue:
  - `OSError: pkg-config probably not installed: FileNotFoundError(2, 'No such file or directory')` is fixed by installing pkg-config
  - `error: command 'gcc' failed: No such file or directory` is fixed by installing gcc
  - `/usr/include/openssl/evp.h:22:12: fatal error: stdio.h: No such file or directory` is fixed by installing glibc
  - `/usr/include/bits/local_lim.h:38:10: fatal error: linux/limits.h: No such file or directory` is fixed by kernel-headers
  - `gcc: fatal error: cannot execute 'as': execvp: No such file or directory` is fixed by installing binutils
  - `src/borg/platform/linux.c:1104:10: fatal error: sys/acl.h: No such file or directory` is fixed by installing acl
- That was it. After installing those dependencies, the build succeeded. I rebooted and reinstalled to confirm.

## Compiling PyFUSE3 on Unraid 6.12.3

- Starting with a fresh reboot, plus the OpenSSL and Python 3.11.4 package installations I figured out earlier
- `pip3 install pyfuse3`
- Installing the following packages as precursors:
  ```
  pkg-config-0.29.2-x86_64-4.txz
  fuse3-3.15.0-x86_64-1.txz
  gcc-13.2.0-x86_64-1.txz
  glibc-2.37-x86_64-2.txz
  kernel-headers-6.1.42-x86-1.txz
  binutils-2.41-x86_64-1.txz
  ```
- python:
  ```
  trio-0.22.2-py3-none-any.whl
  attrs-23.1.0-py3-none-any.whl
  sortedcontainers-2.4.0-py2.py3-none-any.whl
  idna-3.4-py3-none-any.whl
  outcome-1.2.0-py2.py3-none-any.whl
  sniffio-1.3.0-py3-none-any.whl
  ```
- Errors seen and their package fix if there was an issue:
  - `FileNotFoundError: [Errno 2] No such file or directory: 'pkg-config'` fixed by installing pkg-config
  - `Package fuse3 was not found in the pkg-config search path` fixed by installing fuse3
  - `error: command 'gcc' failed: No such file or directory` fixed by installing gcc
  - `/usr/include/python3.11/Python.h:23:12: fatal error: stdlib.h: No such file or directory` fixed by installing glibc
  - `/usr/include/bits/errno.h:26:11: fatal error: linux/errno.h: No such file or directory` fixed by installing kernel-headers
  - `gcc: fatal error: cannot execute 'as': execvp: No such file or directory` fixed by installing binutils
- And then it built! I rebooted and confirmed it worked as well. Seems good.
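Most of the errors above boil down to a missing build tool or header file. Here is a small pre-flight sketch that checks for the tools before kicking off `pip3 install`; the `have` helper is my own invention, not part of NerdTools or either build:

```shell
# Hypothetical pre-flight check: report which build tools are on the PATH
# before attempting `pip3 install borgbackup` or `pip3 install pyfuse3`.
have() { command -v "$1" >/dev/null 2>&1 && echo "found: $1" || echo "missing: $1"; }

# Tools whose absence caused the errors listed above
# (pkg-config, gcc, and `as` from binutils)
for tool in pkg-config gcc as; do
  have "$tool"
done

# Header-only dependencies (glibc, kernel-headers, acl) don't put a tool on
# the PATH; spot-check a representative header file instead
[ -e /usr/include/sys/acl.h ] && echo "acl headers present" || echo "acl headers missing"
```

Running this after a fresh Unraid reboot, before installing any TXZ packages, shows exactly which precursors still need `installpkg`.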
  19. I've submitted a PR to the NerdTools GitHub repo to upgrade Python 3.9 to Python 3.11: https://github.com/UnRAIDES/unRAID-NerdTools/pull/84 As far as I could tell, not a single person had shared a Python 3.11 build for Slackware anywhere on the internet, so I took it upon myself to get an Unraid development environment functional so I could compile Python 3.11 specifically for Unraid. I also included Setuptools 65.5.0 and Pip 23.2.1 as part of the package, so there is no longer a need to install them separately. They can always be updated using Pip if desired, for example "pip3 install --upgrade pip". I've tested it with a couple of different Python projects, including my Difflens project as well as Borgbackup 1.2.4, and as far as I can see it's working well for my uses. There are a few modules that are not compiled into the Python package. For example, as of Python 3.10, OpenSSL >1.1.1 needs to be installed on the system, which Unraid does not provide by default. For that reason, the OpenSSL package will need to be installed alongside the Python package for Python/Pip to work correctly. I've updated the NerdTools dependency list to include OpenSSL, and it should get installed automatically when installing Python 3.11. Some other modules that I didn't compile were `_dbm _tkinter _uuid nis readline`, so they will need to be installed via Pip or another method if necessary. See the PR for more detail on the development.
  20. All right, I got things working. Starting with Python 3.10, OpenSSL 1.1.1 or newer is required https://peps.python.org/pep-0644/ which means the OpenSSL 1.1 installed by default in Unraid 6.12.3 does not meet the requirements. In other words, using Python 3.10 or newer will require OpenSSL to also be installed as a prerequisite. I got OpenSSL 3.1.2 from https://slackware.uk/slackware/slackware64-current/slackware64/n/openssl-3.1.2-x86_64-1.txz and added it to the /boot/extra/ directory to install at boot. I also spent a bunch of time working through the different settings and libraries needed to configure and build/compile Python 3.11.4 on Unraid. I eventually succeeded, and am happy to share the 3.11.4 TXZ file. I put this in the /boot/extra/ directory alongside OpenSSL and am happy to report that Python seems to be working for my needs. I included Pip in the TXZ as well, so Pip will install automatically next to Python and does not need a separate package to install As far as I'm aware, I'm the only person to have successfully/publicly shared a Python 3.10 or 3.11 built for Slackware or Unraid. 
With that in mind, here are some notes I took along the way. I read through http://www.slackware.com/~alien/slackbuilds/python3/build/python3.SlackBuild for inspiration for a SlackBuild file, but did not fully end up using it.

My required package dependencies for the build steps:

- gcc https://slackware.uk/slackware/slackware64-current/slackware64/d/gcc-13.2.0-x86_64-1.txz and `installpkg gcc-13.2.0-x86_64-1.txz` to fix `configure: error: no acceptable C compiler found in $PATH`
- binutils https://slackware.uk/slackware/slackware64-current/slackware64/d/binutils-2.41-x86_64-1.txz to fix `configure: error: C compiler cannot create executables`, which stems from `gcc: fatal error: cannot execute 'as': execvp: No such file or directory` (see https://stackoverflow.com/questions/56801179/fatal-error-cannot-execute-as-execvp-no-such-file-or-directory)
- glibc https://slackware.uk/slackware/slackware64-current/slackware64/l/glibc-2.37-x86_64-2.txz to fix `configure: error: C compiler cannot create executables` with sub-error `/ld: cannot find crt1.o: No such file or directory`
- kernel-headers https://slackware.uk/slackware/slackware64-current/slackware64/d/kernel-headers-6.1.42-x86-1.txz to fix `C preprocessor "/lib/cpp" fails sanity check` with sub-error `linux/limits.h: No such file or directory`
- **Everything above here is the minimal set needed to get `./configure` running end to end**
- make https://slackware.uk/slackware/slackware64-current/slackware64/d/make-4.4.1-x86_64-1.txz, obviously just to run make
- guile https://slackware.uk/slackware/slackware64-current/slackware64/d/guile-3.0.9-x86_64-1.txz to fix the make-time error `make: error while loading shared libraries: libguile-3.0.so.1: cannot open shared object file: No such file or directory`
- gc https://slackware.uk/slackware/slackware64-current/slackware64/l/gc-8.2.4-x86_64-1.txz to fix the make-time error `make: error while loading shared libraries: libgc.so.1: cannot open shared object file: No such file or directory`
- zlib https://slackware.uk/slackware/slackware64-current/slackware64/l/zlib-1.2.13-x86_64-1.txz; RIP, after 45m of compiling I saw an error `zipimport.ZipImportError: can't decompress data; zlib not available`
- libffi https://slackware.uk/slackware/slackware64-current/slackware64/l/libffi-3.4.4-x86_64-1.txz to fix the compilation errors `Failed to build these modules: _ctypes` and `_ctypes.c:118:10: fatal error: ffi.h: No such file or directory`
- openssl (not openssl11, which corresponds to 1.1 rather than the 3.x of normal openssl) https://slackware.uk/slackware/slackware64-current/slackware64/n/openssl-3.1.1-x86_64-1.txz to let `./configure` know that SSL exists, so that hopefully Python builds the SSL module: `checking for openssl/ssl.h in /usr... yes / checking whether compiling and linking against OpenSSL works... yes`
  - I see `/usr/bin/openssl` and `/usr/lib64/libevent_openssl.so` and `/usr/lib64/libssl.so` seem to exist on the test Unraid, so maybe it's safe to assume it's available by default? I suspect pkg-config will aid in detecting that it's available
  - Nope, I see `checking whether compiling and linking against OpenSSL works... no` when running `./configure`
  - Yeah, even when running make I see the same `Could not build the ssl module! Python requires OpenSSL 1.1.1 or newer` error
  - I see that `./configure` has a log line `checking for stdlib extension module _ssl... missing`, which makes me suspect that SSL won't be built into this Python package, and thus pip is going to fail once I get to that point. I bet I could get around that by installing SSL using the instructions from earlier, but then I suspect it won't really be bundled and will instead just look to the system for an SSL implementation.
That said, I wonder if I could install pkg-config AND SSL and set the `--with-openssl-rpath=auto` flag (https://docs.python.org/3/using/configure.html#cmdoption-with-openssl-rpath) during configuration, and then it would just use whatever SSL is available? The alternative is to just build this, realize SSL is missing, install the SSL package as well, and call it a day. I think it should technically be fine if the python installer also depends on SSL.

- pkg-config https://slackware.uk/slackware/slackware64-current/slackware64/d/pkg-config-0.29.2-x86_64-4.txz to resolve a general compiler warning that `WARNING: pkg-config is missing. Some dependencies may not be detected correctly.`
- bzip2 https://slackware.uk/slackware/slackware64-current/slackware64/a/bzip2-1.0.8-x86_64-3.txz to fix `checking for stdlib extension module _bz2... missing` in the `./configure`
- xz https://slackware.uk/slackware/slackware64-current/slackware64/a/xz-5.4.3-x86_64-1.txz to fix `checking for stdlib extension module _lzma... missing` in the `./configure`
- gdbm https://slackware.uk/slackware/slackware64-current/slackware64/l/gdbm-1.23-x86_64-1.txz to fix `checking for stdlib extension module _gdbm... missing` in the `./configure`
- ncurses https://slackware.uk/slackware/slackware64-current/slackware64/l/ncurses-6.4_20230610-x86_64-1.txz to fix failed compilation of the `_ncurses` module

**My steps**

- Download the Python 3 source in xz format to `/boot/buildPython/Python-3.11.4.tar.xz`
- Create a temp directory to do work in, and move there: `rm -rf /tmp/buildPython | exit 0 && mkdir /tmp/buildPython && cd /tmp/buildPython`
- Install the dependencies needed to run `./configure` and `make` from `/boot/buildPython`
- Extract the Python source to the temp directory with `tar xf /boot/buildPython/Python-3.11.4.tar.xz`, which creates a subdirectory `/tmp/buildPython/Python-3.11.4/`
- Move into that subdirectory: `cd /tmp/buildPython/Python-3.11.4/`
- Run `./configure --build=x86_64-slackware-linux --with-ensurepip=upgrade --prefix=/usr --libdir=/usr/lib64 --with-platlibdir=lib64 --enable-optimizations --with-lto --with-pkg-config=yes --disable-test-modules --without-static-libpython`
- Run make with multiple jobs (assuming multiple CPUs) to compile faster: `make -j6`
- Clear and create the directory that `make install` will output into: `rm -rf /tmp/package-python-make-output | exit 0 && mkdir -p /tmp/package-python-make-output`
- Run `make install DESTDIR=/tmp/package-python-make-output`
- Now remove some of the files in `/tmp/package-python-make-output` that should not be packaged:
  - `cd /tmp/package-python-make-output/`
  - `find . \( -name '*.exe' -o -name '*.bat' \) -exec rm -f '{}' \+` to remove Windows stuff
  - `find . -type d -exec chmod 755 "{}" \+` to update permissions
  - `find . -perm 640 -exec chmod 644 "{}" \+`
  - `find . -perm 750 -exec chmod 755 "{}" \+`
  - `find . -print0 | xargs -0 file | grep -e "executable" -e "shared object" | grep ELF | cut -f 1 -d : | xargs strip --strip-unneeded 2> /dev/null || true` to find the ELF executables and shared objects and strip unneeded symbols from them, shrinking files such as `./usr/lib64/python3.11/lib-dynload/termios.cpython-311-x86_64-linux-gnu.so` or `./usr/lib64/python3.11/lib-dynload/_codecs_tw.cpython-311-x86_64-linux-gnu.so`
  - `strip -s usr/lib/* usr/lib64/* usr/bin/*` as recommended by https://docs.slackware.com/howtos:slackware_admin:building_a_package
  - `mkdir install && cd install && wget https://www.slackbuilds.org/slackbuilds/14.2/python/python3/slack-desc` to add a `slack-desc` file to the package
  - Edit `usr/bin/pip3` and the other pip files to point to python3 instead of just python, as this package doesn't install a default python
  - `cd /tmp/package-python-make-output && /sbin/makepkg -l y -c n /tmp/python3-3.11.4-x86_64-1.txz` to actually make the output Slackware package using all the files in the `/tmp/package-python-make-output` directory

python3-3.11.4-x86_64-20230802kubedwithpip.txz
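For reference, the `slack-desc` file fetched in the steps above follows Slackware's fixed format: a comment header containing a "handy ruler", then exactly eleven lines prefixed with the package name and a colon. A sketch of the shape (the description wording here is illustrative, not the exact SlackBuilds.org content):

```
# HOW TO EDIT THIS FILE:
# The "handy ruler" below makes it easier to edit a package description.
# Line up the first '|' above the ':' following the base package name, and
# the '|' on the right side marks the last column you can put a character in.

       |-----handy-ruler------------------------------------------------------|
python3: python3 (object-oriented interpreted programming language)
python3:
python3: Python is an interpreted, interactive, object-oriented programming
python3: language that combines remarkable power with very clear syntax.
python3:
python3:
python3:
python3:
python3:
python3:
python3:
```

`makepkg` bundles whatever is in the `install/` subdirectory into the package, which is why the file is dropped there before the final `makepkg` step.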
  21. I've been trying to get Python 3.11 installed on Unraid 6.11 or 6.12. I looked around for pre-compiled TXZ packages, but only found 3.9.x packages from @dlandon in https://github.com/dlandon/python3 and @EUGENI_CAT in https://github.com/UnRAIDES/unRAID-NerdTools/blob/main/packages/6.11/python3-3.9.16-x86_64-1.txz

Usually for Slackware package compilation, a SlackBuild file is provided to configure the system before calling the package's makefile. The newest SlackBuild I could find out there was for 3.9.5, http://www.slackware.com/~alien/slackbuilds/python3/build/python3.SlackBuild, and it is not Unraid-specific. I was able to install a bunch of support packages on Unraid 6.12.3 and get the SlackBuild working:

- Python-3.11.4.tar.xz
- libmpc-1.3.1-x86_64-1.txz
- binutils-2.40-x86_64-1.txz
- libzip-1.10.0-x86_64-1.txz
- expat-2.5.0-x86_64-1.txz
- lzlib-1.13-x86_64-1.txz
- gc-8.2.4-x86_64-1.txz
- make-4.4.1-x86_64-1.txz
- gcc-13.1.0-x86_64-2.txz
- openssl-3.1.1-x86_64-1.txz
- gcc-g++-13.1.0-x86_64-2.txz
- openssl-solibs-3.1.1-x86_64-1.txz
- git-2.41.0-x86_64-1.txz
- openssl11-1.1.1u-x86_64-1.txz
- glibc-2.37-x86_64-2.txz
- openssl11-solibs-1.1.1u-x86_64-1.txz
- guile-3.0.9-x86_64-1.txz
- pkg-config-0.29.2-x86_64-4.txz
- kernel-headers-6.1.35-x86-1.txz
- python3.SlackBuild
- libffi-3.4.4-x86_64-1.txz
- zlib-1.2.13-x86_64-1.txz

However, I've had some issues after installing this custom-built Python. I bundled Pip into the install since that's supported with a configure flag, but after rebooting Unraid I started seeing "Can't connect to HTTPS URL because the SSL module is not available." I suspect this is because the OpenSSL packages were only present at compile time: Python's SSL module links against the system OpenSSL libraries rather than bundling them, and after an Unraid reboot those libraries no longer exist. I've seen a couple other issues along this line as well.
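A quick way to check for this symptom (a sketch, not Unraid-specific) is to ask the interpreter to import its SSL module and report which OpenSSL it was linked against; on a broken install the import itself fails:

```shell
# If the _ssl extension can't find its OpenSSL shared libraries, this
# import fails; otherwise it prints the linked OpenSSL version string.
python3 -c 'import ssl; print(ssl.OPENSSL_VERSION)'
```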
I'm hoping someone here is aware of an Unraid-compatible Python 3.11 TXZ file, or a SlackBuild file that could get Python compiled and functional on Unraid. Or if someone here simply has more experience with compiling, SlackBuilds, or makefiles on Unraid, any pointers would also help. Thanks!
  22. Nice. Just in case you weren't aware, the modifications will be reset on each reboot, so you'll probably want to make a copy of the modified script on the /boot USB drive and then reference that script in your SNMP config instead of the default one. Also, if your fix is generalized, feel free to submit a pull request and I can see about adding this for wider usage 😇
  23. I wanted to update to say I've gotten things semi-reliably working. I'm using macOS Ventura 13.4.1 and macOS Sonoma 14 Public Beta 1, connecting to Unraid 6.12.3. On Unraid, the only modification I made was to add the typical recommendation to the Samba extra configuration under Settings > SMB:

[Global]
fruit:metadata = stream

I created a new Share and started new Time Machine backups, which succeeded. Incremental backups work more reliably from some Macs than others, and rebooting the Mac won't get a failing one working again. Part of what I think has helped is making sure the Mac is the first thing to connect to Unraid after a reboot. In other words, if I get the typical failure to connect to Time Machine to back up, I:
1. Reboot Unraid
2. Run the TM backup, or just connect to Unraid from the Mac, before any Windows computer gets a chance
3. Run the backup, hopefully succeeding

It seems to me that the key is making sure the Mac is the first device to connect to Samba, and ensuring the fruit:metadata = stream customization is present. I haven't found any of the other smb-fruit config options to affect the ability to back up.
  24. That is an Unraid-specific executable as far as I'm aware, so I think trying to get it showing up there would be a dead end. On the other hand, you don't actually need mdcmd to get disk temperature. That responsibility is held by smartctl, which is not Unraid-specific. You just give it a device path (e.g. /dev/sdb) and it will report back temperature and SMART information https://github.com/kubedzero/unraid-snmp/blob/main/source/usr/local/emhttp/plugins/snmp/disk_temps.sh#L120 So you could make a script that determines which device nodes the non-pool drives are attached as, and then calls smartctl for those /dev/sdX paths. If you then added that script to your SNMP config, the data would start showing up in SNMP. smartctl can also scan for attached devices, aka `smartctl --scan`, so you could always start there as well. Hope that makes sense! Feel free to browse through the GitHub code for the disk temp script for further insight.
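As a sketch of that idea (the attribute names and column layout of `smartctl -A` output vary by drive, so treat the field choices below as assumptions):

```shell
# get_temp: pull a drive temperature out of `smartctl -A` output on stdin.
# Looks for the common Temperature_Celsius / Airflow_Temperature_Cel
# attributes and prints the raw value (10th whitespace-separated column).
get_temp() {
  awk '$2 == "Temperature_Celsius" || $2 == "Airflow_Temperature_Cel" { print $10; exit }'
}

# Hypothetical usage against a real device:
#   smartctl -A /dev/sdb | get_temp
```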
  25. There is a separate standalone script that parses the output of `mdcmd status` to determine what disks are installed: https://github.com/kubedzero/unraid-snmp/blob/main/source/usr/local/emhttp/plugins/snmp/disk_temps.sh#L52C18-L52C30 Here is some more information on the output of the mdcmd command. The SNMP configuration will only adjust the free/used disk space output and won't add disk temperature.
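To sketch what that parsing does (the `rdevName.N=sdX` line format here is an assumption based on what disk_temps.sh consumes; check `mdcmd status` output on your own system):

```shell
# list_array_devices: read `mdcmd status` style key=value output on stdin
# and print the /dev path for each assigned array slot; empty rdevName
# values are unassigned slots and get skipped.
list_array_devices() {
  awk -F= '/^rdevName\./ && $2 != "" { print "/dev/" $2 }'
}

# Hypothetical usage: mdcmd status | list_array_devices
```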