BarbaGrump


Posts posted by BarbaGrump

  1. On 7/5/2023 at 5:32 PM, independence said:

    Same here. I disconnected the LCD, but I was not able to disconnect the LEDs, so one of them (the green or the blue LED) constantly flashes

    I've fixed that problem with black scotch-tape... 🙂

  2. On 5/25/2023 at 9:25 AM, Giorgio said:

    Hi Guys,

    I have an Asustor Lockerstor 6 Gen 2.

    I updated the BIOS to version 1.21 and use it with Proxmox and Unraid in a VM.

    Previously I had Unraid on a different system, and I migrated the same USB key to this one without problems.

     

    The only problem I had was the HIGH TEMPERATURE of the CPU: about 70°C idle and over 90°C at full load. I have already changed the thermal paste, because the standard one was a big mess of white paste; with the new one I lowered the idle temperature from 87 to 70°C, but it's still high.

    About this, I read above that the FAN is not controlled (I also noticed that). I'm thinking of replacing the fan and supplying the new one with 12 V directly, without throttling it. Do you think that's a good idea?

     

    About the BIOS settings: how do you set the parameters? I didn't find any information.

     

    In this condition Unraid works well for a few days, but after that it stops with a KERNEL PANIC error. I'll try to attach an image of the error. Any ideas? Could it depend on the high CPU temperature?

     

     

    Interesting! This is exactly my next adventure... putting Proxmox on the AS6704T and moving my Unraid VM (and the rest of the VMs) that currently run on a DIY rig. Will get back with my findings.

     

    I did replace the fan (see post #1), and that had an effect on the temps.

     

    You hammer F5 at boot to get to a menu, where you use Tab to choose "Settings". From there it's an ordinary BIOS.

     

    Kernel panic... yikes! Is it the Unraid VM that panics, or is it Proxmox? I'm running Unraid under Proxmox (SATA controller and GPU passed through), and it's rock solid. My reason for stepping away from my much more powerful DIY rig (Core i5-11500T) back to this nice AS6704T is severe problems with the NIC... the dreaded I225-V that crashes under load despite me trying every solution on the Internet.

     

     

  3. On 5/22/2023 at 10:02 AM, Jacksaur_ said:

    Yet another old necro (Apologies!), but this seems like the best thread for talking about this device in particular.

    How's it going for you these days? Considering running the same myself, though with an AS6604T instead of an 670.

     

    The OP mentions that the LEDs on the front don't work: Does that include the LCD screen? Does it just not display anything at all?

    It just shows "Starting up..." or something like that. I've disconnected the display, so I don't remember exactly.

  4. On 2/20/2023 at 10:59 AM, independence said:

    Thanks for sharing your experience.

    Some months ago I thought I needed a Win10/11 VM 24/7, but I started to use Docker a bit more extensively and now I only use the Win VM some hours per week. I think the N5105 is sufficient to run 10-20 containers 24/7 and the VM sometimes. I like the form factor of the Asustor and the 4x NVMe option so much that I will give it a try ;-)

    @independence

    Any updates on your adventures with the Asustor?

  5. 3 minutes ago, ich777 said:

    Then maybe think about changing the paths to the real file path since this will write directly to the disks and save some overhead and CPU cycles. ;)

     

    It's just a recommendation from my side, you should be also fine if you are using the FUSE file path.

     

    I will look into it... can never save too many CPU cycles -> lower power consumption 🙂

     

  6. 1 minute ago, ich777 said:

    I do that in all my templates (or better speaking, in almost all) by default. You can set it to /mnt/user/... too if you want, but I don't recommend that because you can run into issues (mostly on my game server containers, where they would simply segfault or simply not work <- this is caused by the FUSE file path), and you save some overhead because you are not running it through FUSE, but that is really negligible in the case of Edge.

    If you have your appdata set to use cache "Only" or "Prefer" nothing should go wrong (which it should be in my opinion) but if you've set it to "Yes" you should change the path to /mnt/user/...

     

    Hope that explains it a bit.

    Thanks! I run all my apps on a share with the "Only" option set for the cache 🙂

     

  7. 12 hours ago, ich777 said:

    Done.

    Really awesome! Working perfectly!

     

    One question... is there a reason why the persistent storage (/ms-edge) by default maps to a disk rather than a share?

    i.e. Data Dir: /mnt/cache_1tb/appdata/microsoft-edge/

    whereas the share instead would be /mnt/user/appdata/microsoft-edge

     

    I changed to the share, and everything seems to work.

     

    Once again...thank you for your effort!

  8. 13 minutes ago, ich777 said:

    Maybe, but I have to look into this since I have to somehow get the version and keep the container up to date and other things that are a bit tricky on Chromium based browsers.

    I understand the problem. Although it's not the way containers are supposed to work... could one think of a 0.1 version where Edge is kept updated from within the container (persistent storage), the same way most Nextcloud containers work? Or is that as complicated as the way you do this with Chromium?

  9. I run Unraid in a Proxmox VM and haven't tried this on "normal" Unraid, but when I run it on my Proxmox host it saves me a couple of watts, because it doesn't wake all cores if just one is needed, if I'm understanding it correctly.

     

    Check

    cat /sys/devices/system/cpu/intel_pstate/status (you might have to change path somewhat)

    If it says "active", you can change it to "passive"

    echo "passive" > /sys/devices/system/cpu/intel_pstate/status

     

    If it works, add it to /boot/config/go

    If not, change back with 

    echo "active" > /sys/devices/system/cpu/intel_pstate/status

     

    *edit 

    source: https://askubuntu.com/questions/1380386/permanently-setting-intel-pstate-driver-to-passive
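    For persisting this, here is a minimal sketch (my own, not from the linked answer) of a guard you could append to /boot/config/go; the helper name and the temp-file demo are just for illustration:

    ```shell
    #!/bin/sh
    # Sketch: switch intel_pstate to passive, but only if the driver is present
    # and currently active. Safe to call from /boot/config/go at every boot.
    set_pstate_passive() {
        status_file="$1"   # normally /sys/devices/system/cpu/intel_pstate/status
        [ -w "$status_file" ] || return 0     # no intel_pstate here: do nothing
        if grep -q '^active$' "$status_file"; then
            echo passive > "$status_file"
        fi
    }

    # Demo against a temp file instead of the real sysfs node:
    tmp=$(mktemp)
    echo active > "$tmp"
    set_pstate_passive "$tmp"
    cat "$tmp"                                # prints "passive"
    rm -f "$tmp"
    ```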

  10. @independence

    Well, yes. It's working well if your use case is anything like mine above.

    But I've moved to a new environment, with Unraid virtualized on a Proxmox host together with some other Linux, Windows and macOS VMs, and this new environment was too much for the N5105 to handle. So I'm now on a DIY Core i5-11400 (same rig mentioned in the first post) instead, with an average power draw of 26 W (been able to tune this down), instead of 19-20 W for the Asustor.

  11. On 11/4/2020 at 7:30 PM, CS01-HS said:

    I saw a few of these hard resetting link errors during my mover run. Thankfully (?) no CRC errors reported. ata3 is a spinning disk attached to an integrated ASM1062 controller.

     

    I wonder if it might be related to the power-saving tweaks because nothing else changed. For now I've disabled them and will see if they reappear. Maybe coincidence but I'm posting in case others have the same issue.

    Nov  3 23:17:30 NAS move: move: file /mnt/cache/Download/movie_1.mp4
    Nov  3 23:17:33 NAS kernel: ata3.00: exception Emask 0x10 SAct 0x80 SErr 0x4050002 action 0x6 frozen
    Nov  3 23:17:33 NAS kernel: ata3.00: irq_stat 0x08000000, interface fatal error
    Nov  3 23:17:33 NAS kernel: ata3: SError: { RecovComm PHYRdyChg CommWake DevExch }
    Nov  3 23:17:33 NAS kernel: ata3.00: failed command: WRITE FPDMA QUEUED
    Nov  3 23:17:33 NAS kernel: ata3.00: cmd 61/00:38:58:44:51/04:00:2c:02:00/40 tag 7 ncq dma 524288 out
    Nov  3 23:17:33 NAS kernel:         res 40/00:30:58:40:51/00:00:2c:02:00/40 Emask 0x10 (ATA bus error)
    Nov  3 23:17:33 NAS kernel: ata3.00: status: { DRDY }
    Nov  3 23:17:33 NAS kernel: ata3: hard resetting link
    Nov  3 23:17:33 NAS move: move: file /mnt/cache/Download/movie_1.mp4
    Nov  3 23:17:33 NAS kernel: ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    Nov  3 23:17:33 NAS kernel: ata3.00: supports DRM functions and may not be fully accessible
    Nov  3 23:17:33 NAS kernel: ata3.00: supports DRM functions and may not be fully accessible
    Nov  3 23:17:33 NAS kernel: ata3.00: configured for UDMA/133
    Nov  3 23:17:33 NAS kernel: ata3: EH complete
    Nov  3 23:17:35 NAS move: move: file /mnt/cache/Download/movie_2.mp4

     

    This is exactly what I've experienced... see this post:

    In short... my SATA controller doesn't like some of the tunables coming from "powertop --auto-tune", but running all the commands suggested in the 1st post (except the SATA one) works like a charm.

  12. On 12/9/2022 at 3:34 PM, BarbaGrump said:

    After running this setup, problems mounting...

    I wanted to add a second SSD to the cache pool to up the redundancy, but when I start the system up again, I get errors from the ATA system, and according to the Internet the errors are due to bad SATA cables, connectors or the like. This system has a 4xSATA backplane and as such no cables. What to do? I have tried to move disks around in the chassis, but now it's always one or two of the slots (not always the same) that produce errors, and it doesn't seem to make a difference how many disks or which type (SSD, HDD) I use. So this is not a stable system!

     

    Well, hell... did some more testing, and what do you know... as it seems, the problem above comes from the command "powertop --auto-tune" recommended in the first post.

    The SATA system in this box doesn't like the tunable "Enable SATA link power management for hostX" being set to "Good" in powertop, which happens when you run "powertop --auto-tune". So when I run each powertop command as described in the 1st post, leaving out the SATA part, everything works as normal. The box is now idling at 16 W, so maybe the SATA tunable accounted for about 1 W.
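    That powertop tunable maps to the sysfs knob link_power_management_policy. A hedged sketch (helper name mine, demoed against a temp directory rather than real hardware) of forcing every SATA host back to max_performance, i.e. undoing the tunable this backplane dislikes:

    ```shell
    #!/bin/sh
    # Sketch: undo the powertop SATA tunable by forcing every SATA host back
    # to max_performance. The layout mirrors /sys/class/scsi_host.
    reset_sata_lpm() {
        root="${1:-/sys/class/scsi_host}"
        for f in "$root"/host*/link_power_management_policy; do
            [ -w "$f" ] && echo max_performance > "$f"
        done
        return 0
    }

    # Demo against a fake sysfs tree:
    tmpd=$(mktemp -d)
    mkdir -p "$tmpd/host0"
    echo min_power > "$tmpd/host0/link_power_management_policy"
    reset_sata_lpm "$tmpd"
    cat "$tmpd/host0/link_power_management_policy"   # prints "max_performance"
    rm -rf "$tmpd"
    ```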
     

    Edit:

    Here are energy stats from Home Assistant for 24 h… guess when this new box went into business 🤔?

     

    FA510E7B-C830-421E-8527-0A6EEBAC63E7.jpeg

  13. On 10/15/2022 at 9:06 PM, BarbaGrump said:

    So... I've been using UR for about a year on different HW and LOVE the flexibility it gives me. Before that I was a long-time Synology user, but since Synology started making it difficult for us poor users (for example, using a USB Zigbee controller), and with all the nagging because I didn't buy Synology's EXPENSIVE disks and RAM, I simply love UR!

     

    Up to this point I've been running UR on DIY Intel hardware (i5, 16 GB RAM; array: 2x12 TB; pool: 2x1 TB NVMe; cache: 1 TB SSD), with 30 containers running all of my family's needs (Home Assistant, Plex, Sonarr, Nextcloud, Vaultwarden etc.) and 3 VMs (for fun). The server clocks in at an average of about 35-40 W, which isn't bad, but I think I could do better 🙂

     

    So, I bought myself an ASUSTOR LOCKERSTOR 4 Gen 2 (AS6704T), Celeron N5105, 4 drive bays, 4 NVMe slots, on Amazon, and here's my experience so far.

     

     Well, It works 🙂!!

     

    1. Connect a display to HDMI and a keyboard to USB (I'm using a USB hub and a wireless Logitech keyboard)

    1.5 (optional) Open the case and disconnect the front LCD panel. The LCD will go black, but two green diodes on the front will flash. Most irritating...

    2. To get into the BIOS, press F5 after POST (small white square in the upper left corner of the screen), then Tab to the Settings page.

    3. Set your Unraid USB to be no. 1 in the boot order

    4. Set other parameters in the BIOS as you see fit. I would love to hear from someone who knows which settings to set to optimize for power saving... I'm just guessing on my own 😞

    5. F10 and reboot. Voilà! Unraid's running on a freaking NAS!!!

    6. Install the IT87 drivers through Apps (add "modprobe it87" to /boot/config/go) to SEE the fan speed. The fan runs at a constant 735-ish rpm.

    7. Install Powertop and config. See 

    8. I'm now at an average of 15-22W 🙂

     

    What doesn't work:

    - Fan Control

    - LEDs on front panel

     

     

     

    After running this setup, problems mounting...

    I wanted to add a second SSD to the cache pool to up the redundancy, but when I start the system up again, I get errors from the ATA system, and according to the Internet the errors are due to bad SATA cables, connectors or the like. This system has a 4xSATA backplane and as such no cables. What to do? I have tried to move disks around in the chassis, but now it's always one or two of the slots (not always the same) that produce errors, and it doesn't seem to make a difference how many disks or which type (SSD, HDD) I use. So this is not a stable system!

     

    Instead I ordered a Topton N5105 mini-ITX motherboard to replace the "old" Core i5 system the Asustor was meant to replace.

  14. On 10/16/2022 at 4:28 PM, Unpack5920 said:

     

    rmmod i915
    echo "options i915 enable_fbc=1 enable_guc=3" > /etc/modprobe.d/i915.conf
    modprobe i915

     

     

    Thanks a bunch for this... I've struggled with my Docker/Plex setup on my Unraid instance on an Asustor Lockerstor 4 Gen 2 AS6704T with a Celeron N5105. Your suggestion did it!

     

    And to make it permanent and survive reboots you could:

     

    echo "options i915 enable_fbc=1 enable_guc=3" > /boot/config/modprobe.d/i915.conf

     

     

     

  15. I'm getting this too, today in 6.11, after trying to add an isolated bridge:

    1. virsh net-define /tmp/isolated0.xml

    2. virsh net-start isolated0

    3. virsh net-autostart --network isolated0

     

    /tmp/isolated0.xml:

    <network>
      <name>isolated0</name>
      <bridge name="virbr99" stp="on" delay="0"/>
    </network>

     

    Then, after restarting the VM service: root: '/mnt/user/system/libvirt.img' is in-use, cannot mount

     

    Stopping the Docker service, starting the VM service, then starting the Docker service works.

     

    Also, destroying and undefining the created network doesn't help.

    Also2, virsh net-autostart --network isolated0 does not seem to work... I still have to start the network manually, but that's another thread I guess...

     

    Edit:

    Rebooted, and Dockers and VMs started as expected. But UR reported an unclean shutdown and started a parity check... not sure if it's connected, but if /mnt/user/system/libvirt.img is in use, maybe the unmount was unsuccessful?

     

    Edit2:

    To make a custom defined network autostart, just make a symlink in /etc/libvirt/networks/autostart pointing to ../<whatever your network is called>
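    The symlink trick above can be sketched as a small helper (function name mine; the base path is what this Unraid box uses, while stock libvirt installs often keep networks under /etc/libvirt/qemu/networks instead):

    ```shell
    #!/bin/sh
    # Sketch: make a libvirt network autostart by dropping a relative symlink
    # into the autostart directory, as described above.
    autostart_network() {
        base="$1"   # e.g. /etc/libvirt/networks
        net="$2"    # e.g. isolated0
        mkdir -p "$base/autostart"
        ln -sf "../$net.xml" "$base/autostart/$net.xml"
    }

    # Demo against a temp directory instead of the real config tree:
    tmpd=$(mktemp -d)
    touch "$tmpd/isolated0.xml"
    autostart_network "$tmpd" isolated0
    readlink "$tmpd/autostart/isolated0.xml"   # prints "../isolated0.xml"
    rm -rf "$tmpd"
    ```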

  16. 23 hours ago, Lolight said:

    Yeah, but how about the hard drives' temperatures?

    The 2x12 TB IronWolfs in the array are mostly sleeping (media library), so I've not been watching the temps, but no overheating warnings from UR (default temp warning settings)

  17. 2 hours ago, Lolight said:

    Did you install the drives?

    What are the running drives temps?

    Yes, drivers installed, but no fan control. So instead I've swapped the 4-pin fan for a 3-pin fan. That fan then runs at max (1900+ rpm), so to reduce the speed (and noise) I've put in a resistor; now it's running at 950 rpm.

     

    CPU temps idle at 45-50°C, and with my BIOS settings the temp never exceeds 87°C (4 cores at 100% for a full hour). Throttling starts somewhere over 78°C, taking the speed down to 2400-2500 MHz.

     

    And from what I've noticed, the NVMe drives never get warmer than 65°C

  18. So... I've been using UR for about a year on different HW and LOVE the flexibility it gives me. Before that I was a long-time Synology user, but since Synology started making it difficult for us poor users (for example, using a USB Zigbee controller), and with all the nagging because I didn't buy Synology's EXPENSIVE disks and RAM, I simply love UR!

     

    Up to this point I've been running UR on DIY Intel hardware (i5, 16 GB RAM; array: 2x12 TB; pool: 2x1 TB NVMe; cache: 1 TB SSD), with 30 containers running all of my family's needs (Home Assistant, Plex, Sonarr, Nextcloud, Vaultwarden etc.) and 3 VMs (for fun). The server clocks in at an average of about 35-40 W, which isn't bad, but I think I could do better 🙂

     

    So, I bought myself an ASUSTOR LOCKERSTOR 4 Gen 2 (AS6704T), Celeron N5105, 4 drive bays, 4 NVMe slots, on Amazon, and here's my experience so far.

     

     Well, It works 🙂!!

     

    1. Connect a display to HDMI and a keyboard to USB (I'm using a USB hub and a wireless Logitech keyboard)

    1.5 (optional) Open the case and disconnect the front LCD panel. The LCD will go black, but two green diodes on the front will flash. Most irritating...

    2. To get into the BIOS, press F5 after POST (small white square in the upper left corner of the screen), then Tab to the Settings page.

    3. Set your Unraid USB to be no. 1 in the boot order

    4. Set other parameters in the BIOS as you see fit. I would love to hear from someone who knows which settings to set to optimize for power saving... I'm just guessing on my own 😞

    5. F10 and reboot. Voilà! Unraid's running on a freaking NAS!!!

    6. Install the IT87 drivers through Apps (add "modprobe it87" to /boot/config/go) to SEE the fan speed. The fan runs at a constant 735-ish rpm.

    7. Install Powertop and config. See 

    7.5 DON'T USE powertop --auto-tune! (see post below for details). Add the following to /boot/config/go:

    /etc/rc.d/rc.cpufreq powersave
    for i in /sys/class/net/eth?; do dev=$(basename $i); [[ $(echo $(ethtool --show-eee $dev 2> /dev/null) | grep -c "Supported EEE link modes: 1") -eq 1 ]] && ethtool --set-eee $dev eee on; done
    for i in /sys/class/net/eth?; do ethtool -s $(basename $i) wol d; done
    echo auto | tee /sys/bus/i2c/devices/i2c-*/device/power/control
    echo auto | tee /sys/bus/usb/devices/*/power/control
    echo auto | tee /sys/block/sd*/device/power/control
    echo auto | tee /sys/bus/pci/devices/????:??:??.?/power/control
    echo auto | tee /sys/bus/pci/devices/????:??:??.?/ata*/power/control

    8. I'm now at an average of 15-22W 🙂

     

    What doesn't work:

    - Fan Control

    - LEDs on front panel