Leaderboard

Popular Content

Showing content with the highest reputation on 08/26/22 in all areas

  1. On August 26th, 2005, the very first post announcing "a new network-attached media storage server product called Un-RAID" was made on AVS Forum. What started out as a side project for "beer money" 17 years ago has turned into a growing and thriving company, product, and community of hundreds of thousands of homelab enthusiasts. In honor of this milestone, here is the link to the original forum post by Unraid creator and CEO @limetech that started it all! From all of us at Lime Tech, thank you all for the support, collaboration and trust over all of these years. We wouldn't be here without YOU.
    13 points
  2. The Dynamix file manager has an easy way built in to move content from one disk to another (provided there is enough space on the destination disk):
     1. Open the file manager from the top-right icon.
     2. Select the Move operation from the source disk (e.g. disk1).
     3. Select the target disk to move to (e.g. disk4).
     4. Select PROCEED and all content of disk1 will be moved to disk4 (existing files will not be overwritten unless explicitly selected).
     5. Follow the progress in the opened window, or close it (X in top-right) and let it run in the background.
    3 points
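For anyone who prefers the command line, the no-overwrite copy half of that move can be approximated with `cp -an`. This is a minimal sketch using throwaway temp directories standing in for the disks; on a real server the paths would be mount points such as /mnt/disk1 and /mnt/disk4, and you would only delete the source files after verifying the copy:

```shell
# Throwaway directories standing in for the source and target disks
src=$(mktemp -d)   # stands in for e.g. /mnt/disk1
dst=$(mktemp -d)   # stands in for e.g. /mnt/disk4
echo new  > "$src/movie.mkv"
echo keep > "$dst/existing.mkv"

# -a preserves attributes/timestamps, -n never overwrites existing files,
# mirroring the file manager's "existing files will not be overwritten" default
cp -an "$src/." "$dst/"

ls "$dst"
```

A true move would then remove the source files, which the Dynamix tool handles for you; doing that by hand is only safe after confirming the destination copy is complete.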
  3. Tell me about it. Looking at my post history, this was my first Unraid server, from 2007:
     Asus A8N-SLI Deluxe
     Onboard Nforce controller with 4 x Seagate 500GB SATA
     Onboard Sil3114 with 1 x Seagate 500GB SATA + 1 x WD 250GB SATA
     PCI Sil3112 with 2 x WD 250GB SATA
     Another PCI Sil3112 with 2 x WD 250GB SATA
     lol, 10 disks for a little over 3TB of usable space.
    2 points
  4. Unraid was already on v4.7 when I first started 11 years ago. https://forums.unraid.net/topic/11466-planning-a-small-form-factor-build/ Joe L. was a frequent contributor on the Unraid forum then. This was before we even had plugins. He also made many addons, such as the original preclear script and unMenu with its package manager. I learned he lived in my city when I found that AVS thread. Joe L. made many posts to that thread starting on page 4.
    2 points
  5. @ich777 ok, now it works
    2 points
  6. Sorry for the trouble with flash backup, folks. Here is what is going on: some systems are having issues with flash backup locally, which causes them to repeatedly connect to our backup servers in the cloud. Enough systems are doing this that it is overwhelming our backup servers, causing problems for other systems trying to connect, which in turn makes them repeatedly try to connect, which just exacerbates the problem. We have already tripled the number of cloud servers but some people are still experiencing problems. We are working on a plugin update that will limit how often each server tries connecting to the backup servers in the cloud. Once enough systems update to that version of the plugin, the working systems will be able to connect and start storing their backups again. Additionally, the new update will do a better job of showing certain error states, so we can tell at a glance if those errors are happening, which will make resolution easier. For some errors we'll still need to see your /var/log/gitflash file to diagnose. As we help people with those issues we look for ways to have the plugin display an appropriate message or even resolve the issue automatically. So in the short term, what can you do?
     - Make a manual backup of your flash drive by going to Main -> Boot Device -> Flash -> Flash Device Settings -> Flash Backup.
     - If your flash backup is not working and you are bothered by log messages or extra network traffic, go to Settings -> Management Access -> My Servers and deactivate your flash backup for now. You can re-activate it after the new plugin is released.
     - If you suspect that your flash backup is one of the systems causing problems, you can "Reinitialize" it from that same screen. That basically wipes the backup and starts fresh.
     - Install the new plugin ASAP when it is released (probably next week).
    2 points
  7. Tons of posts related to Windows 10 and SMB as the root cause of the inability to connect to Unraid were fruitless, so I'm recording this easy fix for my future self. If you cannot access your Unraid shares via DNS name ( \\tower ) and/or via IP address ( \\192.168.x.y ), then try this. These steps do NOT require you to enable SMB 1.0, which is insecure.
     Directions:
     1. Press the Windows key + R shortcut to open the Run command window.
     2. Type in gpedit.msc and press OK.
     3. Select Computer Configuration -> Administrative Templates -> Network -> Lanman Workstation, double-click Enable insecure guest logons, and set it to Enabled.
     4. Now attempt to access \\tower.
     Related errors: "Windows cannot access \\tower", "Windows cannot access \\192.168.1.102", "You can't access this shared folder because your organization's security policies block unauthenticated guest access. These policies help protect your PC from unsafe or malicious devices on the network."
    1 point
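The same policy can also be set without gpedit (useful on Windows editions that lack the Group Policy editor) via the registry value that backs it. To the best of my knowledge that value is AllowInsecureGuestAuth under the LanmanWorkstation parameters key; treat this as a sketch and back up the registry before importing:

```
Windows Registry Editor Version 5.00

; Backs the "Enable insecure guest logons" policy (1 = enabled)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters]
"AllowInsecureGuestAuth"=dword:00000001
```

Save as a .reg file and double-click to import, or set the same value with regedit; a reboot or `net stop workstation && net start workstation` may be needed for it to take effect.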
  8. There have been many posts involving SAS to SATA breakout cables of the Forward vs Reverse types. Which type to use where is generally the question. To make it simple: if the host-controller side is a SAS connector (SFF-8470) and the target side is SATA drives, then you must always use a SAS to SATA Forward breakout cable. If the motherboard/host-controller side has SATA connectors and the backplane has a SAS connector, then you must always use a SAS to SATA Reverse breakout cable. For SATA to SATA you just use a "SATA" cable, as there is only one type, although they do come in different lengths. For SAS to SAS connections there is also just a single cable type. The two breakout cables, forward and reverse, are not the same although they look outwardly identical, notwithstanding the fact that some of these cables have the SATA portion of the cables at staggered lengths and some at a fixed length. If you just want to follow the rule you can stop reading now.
     Why are there two different cable types for SAS to SATA connectivity? An objective of the SATA system design was that SATA cables would have identical connectors at each end, and that SATA devices would have identical connectors independent of whether they were disk drives or disk controllers. This helps make interconnections foolproof and reduces the cost of cables. If you ever look at a SATA to SATA cable, the ends are identical and wired as a 1:1 cable. In a 1:1 cable, pin 1 of end A goes to pin 1 of end B, pin 2 to pin 2, pin 3 to pin 3, etc. If you were to look at the SATA connector on a host controller or motherboard and the SATA connector on a disk drive, they look the same, and are physically the same, but each is wired differently. A SATA connector has 7 pins. Two of the pins make up the receive pair and two of the pins make up the transmit pair. The other three pins are all used for ground signals.
     If a 1:1 ("straight-through") SATA to SATA cable is to work, then the receive pair cannot be on the same pins on both device connectors (host vs. disk)! If they were the same pins then we would need what is generally referred to as a "crossover" cable. The absolute rule is that the transmit pins on one side must connect to the receive pins on the other side and vice versa. This is true for PC-to-PC RS-232 connections, Ethernet connections, SATA connections, and almost all types of serial connections which are duplex, i.e. have separate receive and transmit lines. The SATA cable connector design puts the host/controller-side transmit pair on pins 2 & 3 and the receive pair on pins 5 & 6. On the disk drive, the receive pair is pins 2 & 3 and the transmit pair is pins 5 & 6. As a point of reference, pin 7 is the keyed pin. All SAS connectors have their pins structured the same way, no matter whether they are on a host controller card or a SAS backplane. Since, for each of the four ports that make up the SFF-8470 connector, the transmit pins and receive pins are physically in the same location, and we must connect SAS transmit to SATA receive and SAS receive to SATA transmit (for each port), the cables must be different depending on whether the SATA connector is on a disk drive or on a motherboard/host controller. A SAS to SAS cable must therefore be a "crossover" cable, connecting the transmit pairs of a port to the receive pairs of the corresponding port on the other side. Hope that helps clear up the mystery.
    1 point
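The pin assignments the post describes can be summarized in a small table (signal pairs on pins 2 & 3 and 5 & 6, the remaining three pins ground, pin 7 keyed):

```
SATA pin | Host/controller side  | Disk drive side
---------+-----------------------+-----------------
 2, 3    | Transmit pair         | Receive pair
 5, 6    | Receive pair          | Transmit pair
 others  | Ground                | Ground
 7       | Keyed pin             | Keyed pin
```

Reading the table across a 1:1 cable shows why it works: host transmit (2 & 3) lands on drive receive (2 & 3), and host receive (5 & 6) lands on drive transmit (5 & 6).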
  9. Hey Unraiders, All of us at Lime Tech are very pleased to announce the hiring of @elibosley as a Staff Engineer. Eli has a diverse skill set and will be working on a variety of projects for us. Most notably, he'll be working on the My Servers team in the backend. Eli has been an avid Unraid user for years and he can’t wait to start building new features for the OS. In his free time he likes to drive his Veloster N in Autocross events, explore caves, and play video games like League of Legends. Here's Eli's full bio: Please help give Eli a warm Unraid welcome!
    1 point
  10. To be fair, most of the issues we see here on the forums aren't technically Limetech's software. In your AMD example, the best Limetech can do is pick the least buggy version of the drivers provided. When they update to the latest driver from third parties, who knows how it will play out. If you document the issue and it's solvable by rolling back the AMD code, that's what will happen. Hopefully all these sorts of issues get caught early in the RC cycle and get ironed out of the full release.
    1 point
  11. Thanks! Indeed, there was an empty folder; I deleted it and the checkbox appeared. Thanks again.
    1 point
  12. I think I found the same post; the XBMC (now Kodi) website pointed to it. I also started on 4.7 and I must admit I was very excited too.
    1 point
  13. My first Unraid server was with version 4.7 and 2x 2TB data drives + parity, giving a whopping 4TB of storage. I thought I had endless storage!
    1 point
  14. Neato, then I will keep using it. If an issue occurs with future drivers, I will download the diagnostics as soon as it occurs.
    1 point
  15. It's been three hours, which is double the longest uptime I had yesterday, and it's still running. Looks like this may have been the issue. I'll check back in later to confirm after a bit more time.
    1 point
  16. I started, also on 4.7, a couple of months before you (my post is in your thread). Happy Birthday, LimeTech and Unraid. Long may it continue.
    1 point
  17. Thank you very much. It's good to know that the fix is on its way; I will wait for the new plugin version. Congratulations on a very good system, the best one available!
    1 point
  18. Got it—thank you for the help. I'll swap the cable and see how it shakes out.
    1 point
  19. Replaced libvirt with my old backup, and all is now working again. Thanks for your help.
    1 point
  20. Happy birthday! Thank you Limetech for all your work over the years on Unraid.
    1 point
  21. The sweet spot for drives was 320GB back then. How things have changed.
    1 point
  22. I already replied in your general support thread; this is a new one. It looks like for some reason it's mounting device sda as disk1. You should wipe/disconnect that device, but first post the output of: btrfs fi show
    1 point
  23. I currently have DDR4 ECC UDIMMs in my system and have never noticed ECC kicking in in my logs. What is your opinion on next-gen Ryzen with DDR5? Is the on-die ECC enough to be safe from bitrot, and thus any DDR5 is fine for a home server, or is the cost of full-blown ECC worth it?
    1 point
  24. One of the problems with on-die ECC is that it won't report a problem if there is one, while normal ECC does.
    1 point
  25. The settings noted are for the ZFS datasets you wish to share out, so you'll configure them with 'zfs set'.
    1 point
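As a hedged illustration (the pool and dataset names here are hypothetical, and this needs a live pool to run), configuring a per-dataset share property with `zfs set` looks like:

```shell
# Enable SMB sharing on the (hypothetical) dataset tank/media
zfs set sharesmb=on tank/media

# Confirm the property took effect
zfs get sharesmb tank/media
```

Properties set this way are stored with the dataset itself, so they survive reboots and pool exports/imports without separate config files.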
  26. @chchiyan Could you provide me with various CLI screenshots and text outputs like this page? They will be added to the project wiki as the i5-12500 CoreFreq support.
    1 point
  27. I think I discovered the problem and solution. I was using Robocopy on Windows 10 to copy my files over to the unRAID server. However, Robocopy is known to cause the destination folder to become hidden as a result of it changing the folder's attributes. Here is some info about this issue: https://www.mickputley.net/2013/10/robocopy-creates-folder-that-is-hidden.html Hope this helps someone else in the future.
    1 point
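If a share folder has already been hidden this way, the usual remedy is to clear the System and Hidden attributes that Robocopy carried over; the UNC path below is hypothetical, so substitute your own share and folder:

```
:: Clear the System and Hidden attributes on the affected folder
attrib -s -h "\\tower\share\foldername"
```

Run this from a Windows command prompt; the folder should reappear in Explorer immediately.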
  28. Delete network-rules.cfg and it will be recreated, eth0 will be the one used for management.
    1 point
  29. Check this and also update to v6.10.3; if it keeps happening after that, enable the syslog server and post that after a crash.
    1 point
  30. The log shows many NMI events before the crash, and before those events are some strange WSD errors. WSD is known to sometimes cause high CPU usage, so I would start with disabling that: Settings -> SMB Settings
    1 point
  31. It does look like a connection problem, or bad power.
    1 point
  32. That's likely a typo, JMS are Jmicron's USB chips and there's no JMS585 that I can find.
    1 point
  33. That is definitely the wrong command in more than one way. The webUI knows the correct command. The correct command for the command line is documented, but not always understood. https://wiki.unraid.net/Manual/Storage_Management#Running_the_Test_using_the_command_line If you were trying to repair disk sde while it was not assigned to the array, then you didn't specify the partition, so it would be sde1 instead of sde. Since you were trying to repair a disk that was in the array, you must specify the md device or you will invalidate parity, so in this case it would be md6 instead of sde.
    1 point
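To illustrate the distinction, assuming an XFS filesystem and taking the device names from the post (these commands need real devices and, for the md case, the array started in maintenance mode first):

```shell
# Disk NOT assigned to the array: repair the partition, not the whole device
xfs_repair -v /dev/sde1

# Disk that IS in the array (disk 6): use the md device so parity stays valid
xfs_repair -v /dev/md6
```

Running `xfs_repair -n` first does a read-only check without making any changes, which is a safer way to see what the repair would do.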
  34. I would like to once again thank everyone for their suggestions. I ultimately ended up replacing far more than was probably needed, but some of the stuff like the fans and coolers were things on my to-do list anyway. The ultimate culprit ended up being the CPU or the motherboard. I ended up replacing all the fans and coolers as well as the PSU, and went with an Intel 12700K and an Asus PRIME Z690-P D4 main board. I also took the opportunity to install two 1TB M.2 NVMe drives for cache and a 500GB one for my Plex metadata. Thank you again for all the advice.
    1 point
  35. @ich777 Version 1.91.7 fixes the monitoring issue with P-cores-only Alder Lake.
    1 point
  36. Just a heads up - we'll be rolling out an update to mothership tomorrow morning, Aug 26, starting around 4-5am Pacific time. We are expecting 30 minutes of downtime or less, but if you see anything odd during that time, that is probably the reason. Success in less than 5 minutes! No action is needed on your end, but if you are seeing any errors in the UPC, running this in the web terminal may resolve them: unraid-api restart
    1 point
  37. Nice. Best Community Dev ever right here.
    1 point
  38. No problem. I haven't actually ever used the GUI options and my setup may be very slightly different. For example, when I create an array, I set the mount point of the array simply into /mnt - it seems like you've put yours into /mnt/user/zpool. I am not sure if that's in a guide or what, but it doesn't seem like a sensible way of doing it, as you may have permissions problems given that it's a user folder. Also, clearly you're also talking about SMB sharing. I found that the Unraid SMB sharing doesn't really work with ZFS, but luckily ZFS has its own SMB support built in - so you can edit the smbextra file in /etc somewhere - sorry, I can't look it up exactly at the moment - I think it's something like /etc/samba/smb.conf/smbextra.conf. Don't worry, the SMB format has examples and is super easy. Also, for me, any ZFS shares in Unraid under /mnt I just set to nobody.user and that sorts them out; having them as root definitely doesn't work. Just always bear in mind that with Linux the folders must be set to 777, and the files can be whatever you need. Hope that sort of points you in the right direction.
    1 point
  39. Quick update on some tests I've done, in case it is of use to somebody else. It appears that the total GPU memory that GVT-g can address on my system is 3712MB. This gets split between low GM space (a.k.a. Aperture Size) and high GM space. With Aperture Size set to 1024MB, the system has the remaining 2688MB as high GM space. I tried successfully multiple combinations of VMs with different vGPU modes. Combo 1: - 1x GVTg_V5_1 - 1x GVTg_V5_4 Low GM size used: 512MB + 128MB + 128MB (host) = 768MB High GM size used: 2048MB + 512MB = 2560MB Combo 2: - 2x GVTg_V5_2 - 1x GVTg_V5_4 Low GM size used: 2x256MB + 128MB + 128MB (host) = 768MB High GM size used: 2x1024MB + 512MB = 2560MB I haven't tried more than 3 VMs in parallel but I'm expecting that 5x GVTg_V5_4 should also be possible. For my particular system, setting an Aperture Size larger than 1GB not only is useless but hurts because it leaves so little high GM space that I can't even run a single GVTg_V5_1 Thanks again to @ich777 and @alturismo for your help!
    1 point
  40. I tried another controller and it appears to be negotiating at x4! SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] Broadcom / LSI Serial Attached SCSI controller Type: Onboard Controller Current & Maximum Link Speed: 5GT/s width x4 (4 GB/s max throughput) I'm hoping the next parity check is much faster! Thanks again JorgeB!
    1 point
  41. For anyone stuck: I tried basically all suggested solutions to no avail. Ended up just doing a curl from the docker's console. You can grab the token over at https://www.plex.tv/claim Unsure if removing the PlexOnlineName, PlexOnlineToken, PlexOnlineEmail and PlexOnlineHome key/value pairs from Preferences.xml is required beforehand, but I had those removed from a previous attempt. After about 10 seconds a big ol' wall of text will appear in your console and it should be claimed again.
    1 point
  42. OK, everyone motivated me to get off my behind and really update this plugin. Update coming this week. It's a work in progress, but I have figured out I can get a JSON response from the CLI tool and will use that to display some output. This initial release will just be text, but I'll look into putting back in much better-looking graphs and such in a future release.
    1 point
  43. @CyrIng keep up the good work! Thank you so far! As I am now on recent RC4 and also updated the UEFI firmware today (I know, I shouldn't have done both together), my CPPC flag wasn't green by default anymore. Could be I haven't set all my recent BIOS settings; I will check when I have time. I added "corefreqk.HWP_Enable=1 corefreqk.HWP_EPP=1" as kernel parameters and no longer need to enable it manually. Will see how it behaves. @Pillendreher, this might be interesting for you as well.
    1 point
  44. Thanks Solverz, I appreciate the help. sdub, thanks again for the support and time spent on this container.
    1 point
  45. EDIT: NerdTools is now available as a replacement; you might want to check that first. Some tools like iperf3 and perl are now included in the base Unraid release, hence them not being present there. If it doesn't have what you need, request it in the thread, and in the meantime the manual install is still available in the original text below:
     ----------------------
     Nerdpack is deprecated in 6.11. For the record, since it was unfortunately only posted in a thread in the German section instead of here where people would typically come for support (translated), to replicate the functionality (unsupported):
     1. Go to https://slackware.pkgs.org/15.0/slackware-x86_64/ which lists packages for the Slackware 15 release Unraid is based on.
     2. Search for the packages you want.
     3. Download their txz files and put them on the flash drive in /extra (/boot/extra on a running system); that will cause them to auto-install on boot (create the folder if there isn't one).
     4. To be able to use them without a reboot, use the Unraid CLI to navigate to where you put the packages and run installpkg <filename>.
     5. Packages might have dependencies; that would typically be pointed out by an error when trying to run the programs they contain. If so, download and install those as well. The site also has a section listing dependencies that might help, although I wouldn't just install them by default since some are already built into Unraid, so try to run first.
     EDIT: Other package sources:
     https://slackonly.com/pub/packages/15.0-x86_64/
     https://slackware.pkgs.org/current/slackers/
     Of course the Nerdpack repo, although packages may be outdated: https://github.com/dmacias72/unRAID-NerdPack
    1 point
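The steps above can be sketched as a short session; "somepackage" and its file name are placeholders for whatever you downloaded from the mirror, and this only runs on an Unraid/Slackware box with installpkg available:

```shell
# 1. Put the downloaded .txz on the flash drive so it auto-installs on every boot
mkdir -p /boot/extra
cp somepackage-1.0-x86_64-1.txz /boot/extra/   # placeholder file name

# 2. Install it immediately, without waiting for a reboot
installpkg /boot/extra/somepackage-1.0-x86_64-1.txz
```

If the installed program then fails with a missing-library error, that is the dependency case from step 5: fetch and installpkg the named dependency the same way.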
  46. Hi, I fixed it by changing the rights of the data folder in /appdata/prometheus: chown -R 65534:65534 data. Maybe that helps you guys too.
    1 point
  47. I don't see what the two have to do with each other. But:
     rm /boot/config/network.cfg
     rm /boot/config/network-rules.cfg
     powerdown -r
     Your server will be set back up as DHCP etc.
    1 point
  48. I have a https://www.startech.com/en-gb/cards-adapters/pex1to162 with an IBM ConnectX-3 flashed to the latest Mellanox firmware.
     Mellanox Network Card:
     Temperature: 39 °C
     Info: FW Version: 2.42.5000
     FW Release Date: 5.9.2017
     Product Version: 02.42.50.00
     Rom Info: type=PXE version=3.4.752
     Device ID: 4099
     Description: Node Port1 Port2 Sys image
     GUIDs: ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
     MACs: f452140eb5c0 f452140eb5c1
     VSD:
     PSID: MT_1080120023
     [ 37.909940] mlx4_core 0000:03:00.0: 7.876 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x1 link at 0000:00:1c.6 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
     With iperf3 I get the following:
     root@computenode:~# iperf3 -c 192.168.254.1
     Connecting to host 192.168.254.1, port 5201
     [  5] local 192.168.254.2 port 35822 connected to 192.168.254.1 port 5201
     [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
     [  5]   0.00-1.00   sec   725 MBytes  6.09 Gbits/sec    0    345 KBytes
     [  5]   1.00-2.00   sec   726 MBytes  6.09 Gbits/sec    0    342 KBytes
     [  5]   2.00-3.00   sec   726 MBytes  6.09 Gbits/sec    0    348 KBytes
     [  5]   3.00-4.00   sec   726 MBytes  6.09 Gbits/sec    0    351 KBytes
     [  5]   4.00-5.00   sec   725 MBytes  6.08 Gbits/sec    0    348 KBytes
     [  5]   5.00-6.00   sec   725 MBytes  6.08 Gbits/sec    0    348 KBytes
     [  5]   6.00-7.00   sec   726 MBytes  6.09 Gbits/sec    0    348 KBytes
     [  5]   7.00-8.00   sec   725 MBytes  6.08 Gbits/sec    0    359 KBytes
     [  5]   8.00-9.00   sec   726 MBytes  6.09 Gbits/sec    0    359 KBytes
     [  5]   9.00-10.00  sec   725 MBytes  6.08 Gbits/sec    0    382 KBytes
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [ ID] Interval           Transfer     Bitrate         Retr
     [  5]   0.00-10.00  sec  7.09 GBytes  6.09 Gbits/sec    0             sender
     [  5]   0.00-10.00  sec  7.08 GBytes  6.09 Gbits/sec                  receiver
     iperf Done.
     root@computenode:~#
     And from the other end, the IBM 40Gb card with a 10Gb adapter:
     root@unraid:~# iperf3 -c 192.168.254.2
     Connecting to host 192.168.254.2, port 5201
     [  5] local 192.168.254.1 port 52776 connected to 192.168.254.2 port 5201
     [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
     [  5]   0.00-1.00   sec   766 MBytes  6.43 Gbits/sec    0    226 KBytes
     [  5]   1.00-2.00   sec   777 MBytes  6.52 Gbits/sec    0    223 KBytes
     [  5]   2.00-3.00   sec   778 MBytes  6.53 Gbits/sec    0    223 KBytes
     [  5]   3.00-4.00   sec   778 MBytes  6.53 Gbits/sec    0    223 KBytes
     [  5]   4.00-5.00   sec   778 MBytes  6.53 Gbits/sec    0    223 KBytes
     [  5]   5.00-6.00   sec   778 MBytes  6.53 Gbits/sec    0    223 KBytes
     [  5]   6.00-7.00   sec   778 MBytes  6.53 Gbits/sec    0    223 KBytes
     [  5]   7.00-8.00   sec   778 MBytes  6.53 Gbits/sec    0    223 KBytes
     [  5]   8.00-9.00   sec   777 MBytes  6.52 Gbits/sec    0    223 KBytes
     [  5]   9.00-10.00  sec   747 MBytes  6.26 Gbits/sec    0   5.66 KBytes
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [ ID] Interval           Transfer     Bitrate         Retr
     [  5]   0.00-10.00  sec  7.55 GBytes  6.49 Gbits/sec    0             sender
     [  5]   0.00-10.04  sec  7.55 GBytes  6.46 Gbits/sec                  receiver
    1 point
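As a sanity check on the numbers above: the kernel's "7.876 Gb/s available" figure for an 8.0 GT/s x1 link follows from PCIe 3.0's 128b/130b line encoding, and the ~6.5 Gbits/sec iperf3 results are plausible once TCP/IP and PCIe protocol overhead are subtracted from that ceiling:

```shell
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> usable bits per second per lane
awk 'BEGIN { printf "%.3f Gb/s per x1 lane\n", 8 * 128 / 130 }'
```

The same arithmetic times eight lanes gives the "capable of 63.008 Gb/s with ... x8" figure in the mlx4_core message, so the driver is reporting encoded-payload bandwidth, not raw transfer rate.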