Posts posted by alexdodd

  1. In a long-running effort that I dip in and out of once in a while*, I'd like to try to install these

    i2c-tools,  python-smbus,  python3-smbus,  libi2c-dev

    so I can run a Python script to tame the fans in my X8DTH and make them a little nicer on my ears.  The other solution is to upgrade the motherboard, and I'd be down with that, but it's a whole lot of hassle if I could instead just get the fans to behave somewhat via software.

    I have i2c-tools ticked off because a pre-built package was on https://slackware.pkgs.org/

     

    Of the others, I started with libi2c-dev. It's not available pre-compiled, so I had a go at building it from source using the included autogen.sh.
    After installing

    autoconf-2.69-noarch-1, automake-1.15-noarch-1, m4-1.4.17-x86_64-1

    I'm still running into this:

     

    aclocal: warning: couldn't open directory 'm4': No such file or directory
    configure.ac:20: error: possibly undefined macro: AC_PROG_LIBTOOL
          If this token and others are legitimate, please use m4_pattern_allow.
          See the Autoconf documentation.
    autoreconf: /usr/bin/autoconf failed with exit status: 1
    Error during autoreconf

     

    At which point I feel like I'm pretty deep down a rabbit hole, and I'm not entirely sure what harm I could cause...
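
    For reference, in case anyone else hits this: as far as I can tell, AC_PROG_LIBTOOL is a macro supplied by libtool, which isn't in my list of installed packages above. So my current theory is that installing libtool and creating the missing m4 directory should get autoreconf further. A rough sketch of what I mean (untested, and the exact package version is an assumption; grab whichever matches your release from slackware.pkgs.org):

    # libtool provides the AC_PROG_LIBTOOL macro (package version assumed)
    installpkg libtool-2.4.6-x86_64-1.txz

    # aclocal warned that the m4 directory doesn't exist, so create it
    mkdir -p m4

    # then re-run the whole autotools chain from the source directory
    autoreconf --force --install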

     

    Surely I'm not the first to give this a go, or is there a simpler method I've overlooked?



    *I just checked: it looks like last December was the last attempt, and May before that 😂

  2. After much searching, I can't figure out how one would use what's available on the interwebs to download a precompiled package and replace the necessary files, or otherwise install it within Unraid. I can't find anything related to this either.

     

    Current versions of Slackware already include smartmontools 7.2, so Unraid must be built on top of something earlier than that; I checked the version and it states 14.2+.

    Any pointers for me to read up on? I've hit a dead end here.

    Slackware is obviously without a package manager, and smartmontools doesn't appear to offer any precompiled binaries, or anything helpful in their repositories section other than an email address for the Slackware maintainer.  I can't imagine they would care much for my problem :D
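
    For the record, the route I'm currently eyeing is grabbing a pre-built Slackware .txz and installing it with Slackware's own package tools, which Unraid still ships. A sketch of the idea (the exact package filename is an assumption):

    # install or upgrade from a pre-built package fetched from slackware.pkgs.org
    upgradepkg --install-new smartmontools-7.2-x86_64-1.txz

    # confirm which binary now wins, and its version
    which smartctl
    smartctl --version

    One caveat I'm aware of: Unraid runs from RAM, so anything installed like this is lost on reboot unless it's re-applied from the flash drive (e.g. from /boot/config/go).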

     

     

  3. Been doing a lot of upkeep recently.  Overhauled my whole Unraid box, upgraded hardware/software (6.9-rc2 for cache pools), and the usual.

     

    I have been installing cache disks, moving lots of files around, reorganising, adding dual parity and various other things.

     

    Everything seems good, but over that process I've collected 41 errors on just the one drive. They all seemed to come at once and haven't incremented whilst I was adding the second parity and rebuilding. I think it might be cable related, but I'm not sure how to confirm that?

     

    My drives lack temps and SMART within Unraid, I believe because they are SAS drives.  I've run a short test, but I'm not really sure what I should be looking for, where I should collect extra stats, or how to read the results.

     

    I have Scrutiny and it seems happy with everything, but the SMART data seems very minimal nonetheless (attached too).

     

    Previously I was using an earlier version of Unraid on advice from you guys here, because Unraid doesn't seem to support SAS drives 100%.  But now I've upgraded, I'm a bit in the dark?

    Diagnostics attached, and the SMART report for the specific drive in question attached separately too.

     

    I have a spare 2TB drive precleared and ready on hot-swap duty, and the data on the drive with errors is of zero consequence (it's the movies drive), so I'm not precious about it and don't really care; I now have my two parity drives too. I just wondered if there is anything else worth looking for, or what I should do to monitor this?
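
    For reference, these are the manual checks I've been running against the SAS drive while the GUI shows nothing (the device name is an assumption; yours will differ):

    # SAS drives speak SCSI, so tell smartctl explicitly
    smartctl -x -d scsi /dev/sdb

    # kick off a short self-test, then read the results a few minutes later
    smartctl -t short -d scsi /dev/sdb
    smartctl -l selftest -d scsi /dev/sdb

    # on SAS, the error counter log is one of the key health indicators
    smartctl -l error -d scsi /dev/sdb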

     

     

    Cheers!

    Screenshot 2021-01-06 at 01.10.58.png

    alexserver-diagnostics-20210106-0101.zip
    ST32000444SS_9WM5ZE6E0000C1435EES_35000c50034417483-20210106-0058.txt

  4. Is it still the case (it seems to be, unless it's another issue) that smartmontools and the Unraid front-end GUI don't play nice with SAS drives?

     

    I downgraded last time, but I've bitten the bullet and moved forward to 6.9 so I can use cache pools.  Now I'm SMART-less unless I manually run everything, which isn't ideal. I'm sure I'm not the only person missing SMART on the web GUI? Although it might be my fault; I might have set something else somewhere.

  5. 1 hour ago, saarg said:

     

    As long as you choose the new path in the appdata field in the container, you can move the files around as you want.

    If you are only using the cache drive you should set it to cache only. Cache prefer will move the files in some situations to the array.

    Yeah, I set it to cache: prefer just to move the bulk of things, and now it's set back to cache: only. I'll manually move these files from disk1 to static, wish me luck :D

    The appdata path has never changed, just the underlying physical disk, which I'm not even convinced letsencrypt knows about, so this "move" isn't what the readme is referencing.

     

    //Edit:  Sweet, nothing exploded, certs still valid, and everything still works! 

  6. Am I being an idiot? (50/50)

     

    In Metadata Manager > Refresh metadata > Refresh mode: Replace all metadata

    It says the action is queued, but I don't know where, or why, or how to force it through. My metadata is correct except for a recent IP and network overhaul, so ideally I just want it to update the paths, but I was going to leave it chugging away if possible.

     

    I should probably ask this within the Jellyfin community, as I know this is the support thread for the binhex Docker image. But it's just in case, and I spend more time around these parts currently.

  7. I've just moved my appdata to its own cache drive (long overdue).  SWAG is the only thing I can't get to move completely; I guess it's because the certs are loaded/protected?

     

    I set the share to prefer the static cache drive, disabled Docker in settings, and ran the mover.  If I disable Docker again and manually move these files, will I break something*? (Probably :D)

     


    image.png

     

    *I should say I have read the readme, and it's pretty clear:

    Quote

    WARNING: DO NOT MOVE OR RENAME THESE FILES!
             Certbot expects these files to remain in this location in order
             to function properly!

    However, does moving the files to the same path on the new cache drive count as moving them?

  8. On 12/21/2020 at 10:35 AM, ChatNoir said:

    To prevent misunderstanding, it is best to make the deliberate effort to change the way we are thinking and talking about pools.

     

    Up to 6.8.3 there was only one pool and its name was fixed: cache. Even if the caching functionality was not used, and/or it had other use cases.

     

    One of the things that can help make it clear in your use case is to choose pool names that are clearly understandable to you. And maybe not use the name cache at all.

     

    One thing that can help with understanding is to go back to the configuration and the possibilities.

    For each Share, you can select a pool association (pool A, pool B, etc.) and the expected behavior (No / Yes / Prefer / Only).

    I do think that the Option names could be changed in order not to induce this confusion on the Share page.

     

    From your post, you want a pool for your static data (appdata, etc) and one for temporary files (torrents, etc.)

     

    My advice, create two pools:

    • the first named static (or whatever name makes the most sense to you) with your 1TB
    • the second named scratch (or whatever name makes the most sense to you) with your 500GB

    Then, on a share by share basis, decide what pool to associate and the way the pool shall work with this share. Some examples to help you:

    • appdata share set to static pool and Use cache pool set to Only (potentially to Prefer). This way it stays there.
    • torrent share set to scratch pool and Use cache pool set to Prefer (or Only). This way the downloads and temporary files stay on scratch unless this gets too large. When the download and sharing phase is done, the files can be moved to the appropriate Media share (below).
    • Media share set to scratch pool and Use cache pool set to Yes. This way, the files stay on the share until the Mover is scheduled.

     

    I hope this is clearer. :/

    If not, ask for details.

    If you need help to get from your actual setup to something like this and you are not sure how, do not hesitate to ask.

    This really helped my thinking! Thanks! :)

  9. 5 hours ago, dlandon said:

    It's not a bug.  The reads are legit.  You've never noticed it before because UD didn't show reads and writes until 6.9 and UD now refreshes those periodically so you can see them change.

     

    I suspect that the preclear plugin is monitoring the disk.  Remove the preclear plugin and see if the reads stop.

    Sweet, sorry I missed the update changelog, my bad!

     

    Good point, I'll have a play with the plugins. I just think it's best that the drives spin down, so I'll see if I can track down what is causing it.

  10. I have two precleared drives that appear in Unassigned Devices to be incrementing reads every few seconds. I've never noticed this before, and just wondered what might be causing it, whether it's a bug, or whether I should be even more worried 😂

     

    Logs for the scripts and the device don't show anything, and the drives themselves are obviously unmounted and unformatted, so I don't really understand what could even be read...

    image.png

    alexserver-diagnostics-20201231-1550.zip

  11. I've just overhauled my Unraid box and am putting everything back together "better" than it was cobbled together before.  I just want some validation that I've (not) done something stupid; also, writing it out might help others, or even just serve as rubber-ducking :D

    I'm not sure which forum thread to post this in because it crosses so many, so I've popped it here in General for now; apologies if there is a better place I've missed.

     

    Mostly using binhex Docker containers unless specified; they've done me proud, decent updates etc.
     

    Current modest setup that I'll add to bit by bit:

    • rTorrentVPN, set up to connect via PIA and with Privoxy enabled.
    • Sabnzbd
    • NZB Hydra2/linuxserver
    • Jackett
    • Sonarr & Radarr
    • Jellyfin
    • Nextcloud/linuxserver
    • SWAG/linuxserver

     

    Let me get this straight:

    • Sonarr and Radarr set to use NZBHydra as the only indexer
    • I have NZBHydra set up with Jackett
    • Jackett is running through the Privoxy from rTorrent
    • SWAG reverse proxies to Sonarr, Radarr, and Nextcloud, with more containers later (NZBHydra, Ombi & Organizr, for instance)

     

     

    Questions:

    1. I have the non-VPN variant of SAB; should I switch?
    2. The Nextcloud reverse proxy works, but now the local web UI IP doesn't (certs don't pass/match); is this expected, or have I missed something?
    3. Do I need to set Sonarr & Radarr to use the Privoxy from rTorrent (in general settings), or is that redundant since I've passed the index searches through NZBHydra? Can it hurt to do it anyway?
    4. I feel like a lot of things are passing through each other. Is this better separation of duties, or overcomplicating? Is there a better, simpler way? Pros/cons?

     

    Anything else I might have forgotten, or should pay special attention to?  I want to set it up properly and then mostly forget about it, so "it just works". :D

  12. So I actually got around to this.  I installed and loaded i2c-tools, installed smbus2 from PyPI, and loaded the Python script (changing it to import smbus2 instead, but it's a drop-in replacement).  The script throws no errors and gives no feedback, and now I need to work out how to tweak it. I'm poring over the comments and a thread by the creator, but it's mostly beyond my knowledge I think. :(
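
    For anyone following along, these are the sanity checks I ran before touching the script (bus number 0 is an assumption; check what i2cdetect lists on your board):

    # make sure the kernel exposes the /dev/i2c-* device nodes
    modprobe i2c-dev

    # list the i2c buses the kernel can see
    i2cdetect -l

    # probe a bus for responding addresses (bus number assumed)
    i2cdetect -y 0

    # confirm the python side can import the library
    python3 -c "import smbus2; print('smbus2 ok')"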

  13. The "ES Energy Saving" profile on this old x8 boards seem to leave a lot to be desired.  I have switched out all the fans with low RPM fans, and whilst the noise isn't bad, it just seems the fans either turn high or low with little inbetween.  Given the board has a lot of temperature sensors and fan sensors and PWM one would think it could do better.

     

    Anyway I have come across these:

    https://www.ikus-soft.com/en/blog/2017-10-01-supermicro-x8-controle-ventilateur/

    &

    https://gist.github.com/ikus060/26a33ce1e82092b4d2dbdf18c3610fde

    And these seem to suggest it will give greater control.  I'm just a bit wary, as I don't want to break anything underlying in Unraid in my quest for a sensible fan profile.

    If I follow these instructions, what happens the next time I upgrade Unraid?

     

    Presumably I can use the Nerd Pack to launch the script from that page, after I've installed and enabled what's needed?
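
    What I had in mind for surviving reboots is appending to the go file, since that gets re-run from the flash drive on every boot. Roughly this (the script path and name are my assumptions; it's just the gist script copied to flash):

    # additions to /boot/config/go (sketch)
    modprobe i2c-dev                             # expose /dev/i2c-*
    python3 /boot/custom/x8-fan-control.py &     # launch the fan script in the background

    Since Unraid runs from RAM, anything outside /boot is reset on upgrade, which I assume is why the script itself needs to live on the flash drive.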

     

    Strange that I can't find much else written about this on the Unraid forums; it seems like an X8 board and an old Supermicro chassis are perfect for a cheap Unraid box.  At least it's a massive upgrade on my old Dell T20, pretty happy with it so far!

     

  14. I've read a lot of conjecture about using encrypted BTRFS as a filesystem, and about what to move on or off the cache.  Are there any standard rules to follow?

     

    I don't want to prematurely ruin the SSD, but also want to make sensible use of it.

    Currently I'm going to set it up as encrypted BTRFS, point my downloading dockers' appdata there, and move the folder / update the Docker paths.

     

    Or should I move the whole vdisk location?  That would give added performance at the expense of extra drive wear?

     

    Anything else that makes sense on the cache drive for most benefit?

  15. On 2/4/2019 at 7:02 AM, jbartlett said:

    Good idea! I'll add it to my To Do list.

    I was looking for a way to export, so I can use the data gained and the positions of the drives in my array to try and debug issues.

    It doesn't look like anything like that has been implemented yet?


    I'm running the latest version, and I'll probably just old-school copy-and-paste and screenshot, but it would be nice to have an export function!

  16. It's OK, I've got it: one file in each of the problem series had been corrupted by a data rebuild and all the recent moving described above, and it turns out the file/folder parser doesn't fail gracefully in these circumstances.

     

    SSH in and rm the problem files, and we are back running fine!

  17. I think I'm having permission issues, and I'm not sure what else to do or check.  I've run Docker Safe New Permissions from the Fix Common Problems plugin.

    I'm running the latest container on offer, Sonarr v2.0.0.5344.

    The two series I've noticed are:
    Sons of Anarchy & Dexter

     

    They both exist at \tv\show name\series\episode etc… on the disk, but if I do "update series and scan disk" on the series in Sonarr, nothing is found, even though if I edit the series I can see it's set as "/media/Sons of Anarchy/".

     

    /media is mapped to /tv

     

    All other series from what I can see work absolutely fine. I enabled debugging and ran the scan and this is the log: 

    log: https://hastebin.com/ucosetapah.xml

     

    I think this: 

    SameFileSpecification|No existing episode file, skipping


    suggests that it might be some sort of permissions issue and it can't read the files?

     

    Jellyfin in your Docker image shows these two series just fine after a scan, so I think it's something to do with Sonarr and this Docker container?

    I've posted directly on the Sonarr board too, to see if they can help, but I can't pinpoint whether it's Sonarr or a Docker permission problem.
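
    In case it helps anyone diagnose the same thing, these are the checks I'm using to compare what the host and the container actually see (the container name binhex-sonarr and the host path are assumptions; use whatever yours are):

    # from the unraid host: numeric owner/permissions of the problem series
    ls -ln "/mnt/user/tv/Sons of Anarchy"

    # the same folder as the container sees it, via its /media mapping
    docker exec binhex-sonarr ls -ln "/media/Sons of Anarchy"

    # Docker Safe New Permissions should leave things owned by nobody:users (99:100)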

  18. I have a 'TV' share on its own two disks; I have set the share's allocation method to fill-up with a 5GB minimum free space.

     

    Naturally my TV shows grow incrementally, one episode at a time, so theoretically there might come a time when there is no room left for an episode, or for a whole season (Netflix release).  However, I have set it not to split a show's folder over disks*.

     

    Does the new episode write fail due to lack of space, or (more hopefully) does the whole show folder move to the next drive and write everything there?

     

    *I think so, at least; I have it set to "automatically split only the top level directory as required."
