Page 1 of 2 · Results 1 to 16 of 17

Thread: I wonder when we'll see SOHO >1Gbps Ethernet?

  1. #1
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    I wonder when we'll see SOHO >1Gbps Ethernet?

    So GigE has been commonplace in the SOHO environment for ages now, and for most of that time it's been 'fast enough' for most uses: mechanical HDDs were until recently around or below 100MB/s, off-the-shelf NASes didn't have the CPU grunt to push much more than that, and the vast majority of broadband connections were orders of magnitude slower.

    But in the past few years, we've seen broadband speeds actually catching up to LAN speeds, single SSDs vastly exceeding gigabit's capacity, and even mechanical HDDs capable of much higher throughput, let alone striped arrays.

    And yet we're still on GigE with no immediate sign of that changing. I've seen the odd very expensive motherboard include a 10G NIC, e.g. this one, and even this switch from ASUS with a couple of 10G ports, so maybe the ball is now rolling on the matter, but I must admit I was expecting it to take off a bit faster.

    Sure, needing 10G at home is still a bit niche as not everyone has or needs a fast NAS, but we saw 1G become cheap long before broadband connections were close to saturating 100M. It's not just cost either: 10G NICs and switches still carry quite a power premium over GigE, with a >£1000 12-port switch drawing around 100W loaded (using the XS712T as an example).

    Even WiFi is catching up, with numerous advances in recent years continuing to increase throughput at not-outrageous prices.

    I've seen sub-10G solutions mentioned, e.g. 2.5G and 5G, as possible stopgaps where the cost/footprint of 10G is impractical; if available, I'm sure a load of SOHO users would jump on them.

    Does anyone know something I don't? Because outside of link aggregation or using a local interface like USB 3.1/Thunderbolt, NAS speeds are right up against a wall at the moment.
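    To put that wall in numbers, here's a back-of-envelope sketch. The ~94% efficiency figure is an assumption for TCP/IP framing overhead with standard 1500-byte frames, not a measured value:

    ```python
    # Rough transfer-time comparison across link speeds.
    # The 0.94 efficiency factor is an assumed ballpark for TCP/IP
    # and Ethernet framing overhead, not a benchmark result.

    def transfer_seconds(size_gb, link_gbps, efficiency=0.94):
        """Seconds to move size_gb gigabytes over a link_gbps link."""
        payload_gbps = link_gbps * efficiency
        return (size_gb * 8) / payload_gbps

    # Moving a 100 GB media folder or backup over the LAN:
    for label, gbps in [("1GbE", 1), ("2.5GbE", 2.5), ("10GbE", 10)]:
        t = transfer_seconds(100, gbps)
        print(f"{label}: {t / 60:.1f} minutes")
    ```

    On those assumptions that's roughly 14, 6 and 1.5 minutes respectively - whereas a single SATA SSD could shift the same data locally in around 3 minutes, so GigE really is the bottleneck.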

  2. #2
    Splash
    Guest

    Re: I wonder when we'll see SOHO >1Gbps Ethernet?

    I've seen 10GbE switch prices drop fairly rapidly over the last 18-24 months, but they seem to be levelling out a bit now. Datacentres are now looking at the next step - logically it was going to be 100GbE (with 40GbE available now), but this has proved to be too expensive to justify.

    In all honesty I think the biggest thing slowing uptake is the change of media - 10Gbase-T has latency implications when compared to optical implementations.

    Finally there's the noise level - most SOHO environments demand either silent or very quiet kit, and that's not what we're seeing in the market at present. 10Gbase-T needs to dissipate a chunk more energy as heat than a comparable 1000base-T switch, and that means fans.


    TL;DR - I'd love a 10Gb variant of my SG300 switches at home, but due to noise and cost it's not happening any time soon.

  3. #3
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    Re: I wonder when we'll see SOHO >1Gbps Ethernet?

    Yeah that's what I mean - power use is more than proportionally higher than GigE, so I wonder why that's the case, and how that power is spread over the system, e.g. is it the switch SoC itself (which could presumably benefit from die shrinks), or the PHY side? I suppose with the energy-efficient extensions to Ethernet, power use might be significantly lower at idle and over the shorter runs common in SOHO environments - that 100W isn't so bad if the switch can idle at a couple of watts and scale fairly linearly under load.

    WRT the media, I wonder if optical would help with the power side of things too (assuming a significant amount of power is on the PHY side)? Also I see you can pick up both 10G SFP NICs and 10G optical SFP modules fairly cheaply, so I wonder if base-T has a lot to do with the cost too?

    I thought the latency differences were fairly negligible on the PHY side? Or is there additional logic for copper?

  4. #4
    Splash
    Guest

    Re: I wonder when we'll see SOHO >1Gbps Ethernet?

    I think you're pretty much spot on there - cost-wise, if you're tied to copper, the extra logic/power at the PHY layer (I assume) makes the switching, and therefore the switches, a lot more expensive.

    Latency-wise there's some extra encoding on 10Gbase-T:

    Quote Originally Posted by https://en.wikipedia.org/wiki/10_Gigabit_Ethernet#10GBASE-T
    Due to additional encoding overhead, 10GBASE-T has a slightly higher latency in comparison to most other 10GBASE variants, in the range 2 to 4 microseconds compared to 1 to 12 microseconds on 1000BASE-T. As of 2010 10GBASE-T silicon is available from several manufacturers with claimed power dissipation of 3–4 W at structure widths of 40 nm, and with 28 nm in development, power will continue to decline.

  5. #5
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    Re: I wonder when we'll see SOHO >1Gbps Ethernet?

    Ah interesting. It's also interesting that 10G power is put at 3-4W (presumably per-port) - that's not terribly high at all provided there are low-power states. I wonder if the FinFET nodes will help with power in the near future, as presumably the switch SoCs have been stuck on 28nm planar like most other stuff for the past several years?

  6. #6
    Senior Member
    Join Date
    Mar 2005
    Posts
    4,935
    Thanks
    171
    Thanked
    384 times in 311 posts
    • badass's system
      • Motherboard:
      • ASUS P8Z77-m pro
      • CPU:
      • Core i5 3570K
      • Memory:
      • 32GB
      • Storage:
      • 1TB Samsung 850 EVO, 2TB WD Green
      • Graphics card(s):
      • Radeon RX 580
      • PSU:
      • Corsair HX520W
      • Case:
      • Silverstone SG02-F
      • Operating System:
      • Windows 10 X64
      • Monitor(s):
      • Del U2311, LG226WTQ
      • Internet:
      • 80/20 FTTC

    Re: I wonder when we'll see SOHO >1Gbps Ethernet?

    10GbE isn't as expensive as a lot of people think. Certainly not home-use cheap, but it is approaching SOHO costs. What's still not cheap or quiet is 10GbE to the desktop, as the only realistic ways of getting it there are optical fibre or 10Gbase-T. As mentioned earlier in this thread, 10GBase-T has high power consumption (I regard the latency issue as mostly moot for desktop 10GbE) and fibre is expensive (you need the NIC, the switch and 2x SFP+ modules like these), so you are looking at £300+ per port in total.
    For racks, however, you can save on the SFP+ modules and use SFP+ direct attach cables. They are limited to 10 metres, but they are really just high-quality coax cables.

    For SOHO, you can get a switch with 1/2 SFP+ ports, use the SFP+ direct attach cables to connect your server/NAS to those ports, and the clients can enjoy GbE but up to 10GbE aggregate. Or one client can use the second SFP+ port and get the full 10GbE performance.
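    The aggregate-bandwidth point can be sketched with a toy model (the figures are idealised line rates, ignoring protocol overhead and real switch behaviour):

    ```python
    # Toy model of the SOHO topology above: a NAS on a 10GbE SFP+ port
    # serving clients that each sit on a 1GbE port. No single client
    # exceeds 1Gb/s, but many can pull full speed at once.
    # All figures are idealised line rates, not measured throughput.

    NAS_LINK_GBPS = 10
    CLIENT_LINK_GBPS = 1

    def per_client_throughput(n_clients):
        """Idealised fair share when n_clients pull from the NAS at once."""
        total = min(NAS_LINK_GBPS, n_clients * CLIENT_LINK_GBPS)
        return total / n_clients

    for n in (1, 4, 10, 20):
        print(f"{n} clients: {per_client_throughput(n):.2f} Gb/s each")
    ```

    Up to ten clients each get their full gigabit simultaneously; only beyond that does the NAS uplink become the shared bottleneck.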
    "In a perfect world... spammers would get caught, go to jail, and share a cell with many men who have enlarged their penises, taken Viagra and are looking for a new relationship."

  7. #7
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    Re: I wonder when we'll see SOHO >1Gbps Ethernet?

    Yeah, even for a couple of systems you need to connect at high speed, you can just use twinax between them without a switch - the SFP+ NICs are fairly cheap compared to the Base-T ones. As I said earlier, even the optical SFP modules aren't outrageously expensive, just not mass-market cheap.

    Personally I'd settle for a 2.5/5 gig stopgap; it's just that 1G is starting to become a more obvious bottleneck when moving large files over a network when you're used to >500MB/s local transfer speeds with SSDs. Like I say, it seems like IO speeds in general have seen massive improvements over the past few years, e.g. SATA 1>2>3>PCIe, USB 2>>>>3>3.1, 802.11g>n>ac, 3G>4G, ADSL>ADSL2>VDSL, etc. But throughout all of that, Ethernet has remained static on the desktop. Granted, it was miles ahead of most of those to start with.

    It seems a bit like the situation we had with USB 2 for the longest time - HDDs were much faster but we were stuck with painful transfer speeds to USB hard drives until USB 3 finally came along to save the day.

  8. #8
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,039
    Thanks
    3,910
    Thanked
    5,224 times in 4,015 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: I wonder when we'll see SOHO >1Gbps Ethernet?

    I don't think SOHO needs it though.

  9. #9
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    Re: I wonder when we'll see SOHO >1Gbps Ethernet?

    Unless you have a NAS, most domestic LANs probably have little use for it, not least because of current broadband speeds. But when you do have a NAS, gigabit is often a major bottleneck, as you can easily saturate it with a single modern HDD - and that's before you factor in striped RAID arrays and SSDs.

    So it's arguably not critical yet, but everything around it keeps getting faster, and it would be a nice option to have; as storage speeds up with the SATA>PCIe transition, it's only going to look like more of a bottleneck going forward.

  10. #10
    don't stock motherhoods
    Join Date
    Jun 2005
    Posts
    1,298
    Thanks
    809
    Thanked
    125 times in 108 posts
    • Millennium's system
      • Motherboard:
      • MSI X470 Gaming Plus
      • CPU:
      • AMD 3600x @ 3.85 with Turbo
      • Memory:
      • 4*G-Skill Samsung B 3200 14T 1T
      • Storage:
      • WD850 and OEM961 1TB, 1.5TB SSD SATA, 4TB Storage, Ext.
      • Graphics card(s):
      • 3070 FE HHR NVidia (Mining Over)
      • PSU:
      • ToughPouwer 1kw (thinking of an upgrade to 600w)
      • Case:
      • Fractal Design Define S
      • Operating System:
      • Windows 101 Home 64bit
      • Monitor(s):
      • HiSense 55" TV 4k 8bit BT709 18:10
      • Internet:
      • Vodafone 12 / month, high contentions weekends 2, phone backup.

    Re: I wonder when we'll see SOHO >1Gbps Ethernet?

    I give it 24 months or so - networks don't move very quickly, unlike say GPU technology. Also, as has been discussed, the power requirements seem to need to come down a fair bit.

    It would be great to see things progress faster; however, on the other side of the coin, I'm sure there are many deployments even today @ 100Mbps.
    hexus trust : n(baby):n(lover):n(sky)|>P(Name)>>nopes

    Be Careful on the Internet! I ran and tackled a drive by mining attack today. It's not designed to do anything than provide fake texts (say!)

  11. #11
    Splash
    Guest

    Re: I wonder when we'll see SOHO >1Gbps Ethernet?

    Quote Originally Posted by Millennium View Post
    I give it 24 months or so, networks don't move very quickly unlike say GPU technology. Also as has been discussed the power requirements seem to need to come down a fair bit.

    It would be great to see things progress faster however on the other side of the coin, I'm sure there are many deployments even today @ 100mbps.
    As I said earlier - even if the price *does* drop to levels people are prepared to pay, I think one of the major issues is noise: there would be a lot of power optimisation to do before those 10Gb switches are anywhere near quiet enough for a SOHO environment, and I think that's likely to be just as big a hurdle.

    Kit in a rack in a datacentre? Don't care about the volume, but how many of us are likely to find ourselves sticking a noisy switch behind the TV? I'd rather just have a slower network. I looked into Infiniband as a way to get cheap (or relatively cheap) connectivity for the homelab, but the switching is just too noisy.

  12. #12
    root Member DanceswithUnix's Avatar
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    12,986
    Thanks
    781
    Thanked
    1,588 times in 1,343 posts
    • DanceswithUnix's system
      • Motherboard:
      • Asus X470-PRO
      • CPU:
      • 5900X
      • Memory:
      • 32GB 3200MHz ECC
      • Storage:
      • 2TB Linux, 2TB Games (Win 10)
      • Graphics card(s):
      • Asus Strix RX Vega 56
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 39 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Iiyama 27" 1440p
      • Internet:
      • Zen 900Mb/900Mb (CityFibre FttP)

    Re: I wonder when we'll see SOHO >1Gbps Ethernet?

    There seems to be quite a push towards 25GBit atm, perhaps either 14 or 10nm silicon will make it cheap and low power enough for high end home use.

  13. #13
    Splash
    Guest

    Re: I wonder when we'll see SOHO >1Gbps Ethernet?

    Quote Originally Posted by DanceswithUnix View Post
    There seems to be quite a push towards 25GBit atm, perhaps either 14 or 10nm silicon will make it cheap and low power enough for high end home use.
    Cisco's 100Gbit switches are actually made up of 4 x 25Gbit chips per port - this allows them to do 25 (and 50Gbit, if what I'm told is to be believed) at a more competitive price than the 40Gbit offerings.


    Hilariously, the next speed of Fibre Channel switches is 128Gbit(!), having only recently launched 32Gbit. Flash has some serious go-faster stripes.

  14. #14
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    Re: I wonder when we'll see SOHO >1Gbps Ethernet?

    I've never really understood why there's no standardised way of truly bonding Ethernet links to make a single faster link - by that I mean being able to send e.g. a single TCP stream at 2Gbit over two gigabit links. It's not like it's technically hard to do: gigabit Ethernet itself uses the combined bandwidth of multiple pairs simultaneously, same with MIMO WiFi, DOCSIS channel bonding, and anything using e.g. OFDM. Even some DSL providers support bonding lines like that, as does copper Ethernet in the First Mile.

    I mean I can understand not supporting it over unmanaged switches, but I don't get why two links can't be combined below layer 2 so they share a MAC address with the switch/NICs handling the lower level stuff?

    Yeah I'm aware of balance-rr but it operates at a higher level and can mess with packet ordering which can cause issues with TCP, and AFAIK is only really a *nix thing, not really found on switches etc.

    Or am I wrong?
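    For what it's worth, the reason standard aggregation (802.3ad/LACP-style) caps a single stream at one link is that the switch picks the egress link by hashing the flow's headers, precisely so packets of one flow stay in order. A minimal sketch of the idea - the hash function and tuple layout here are illustrative, not the actual 802.3ad frame-distribution algorithm:

    ```python
    # Sketch of flow-hash link selection, as LACP-style aggregation does it.
    # Every packet of one flow hashes to the same member link, preserving
    # ordering - but a single flow can never exceed one link's capacity.
    # The hash and field choice are illustrative, not the real algorithm.

    NUM_LINKS = 2

    def pick_link(src_mac, dst_mac, src_port, dst_port):
        """Choose a member link from the flow's headers."""
        flow_key = (src_mac, dst_mac, src_port, dst_port)
        return hash(flow_key) % NUM_LINKS

    # One TCP flow: every packet takes the same link.
    flow = ("aa:bb", "cc:dd", 51000, 445)
    links_used = {pick_link(*flow) for _ in range(1000)}
    print(len(links_used))  # 1 - the whole flow is pinned to one link

    # Many flows (varying source port) spread across the members,
    # which is why aggregation helps servers talking to many clients.
    many = {pick_link("aa:bb", "cc:dd", p, 445) for p in range(51000, 51100)}
    print(sorted(many))
    ```

    Distributing one flow's packets round-robin (as balance-rr does) sidesteps the cap but gives up that ordering guarantee, which is exactly the TCP-reordering problem mentioned above.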
    Last edited by watercooled; 21-03-2016 at 08:53 PM.

  15. #15
    Registered+
    Join Date
    Mar 2016
    Posts
    18
    Thanks
    0
    Thanked
    1 time in 1 post

    Re: I wonder when we'll see SOHO >1Gbps Ethernet?

    Even if your broadband was faster than your gig Ethernet, that would only be a factor if one user was using that much bandwidth. If it's shared out, gig Ethernet gives each user a gigabit.

  16. #16
    root Member DanceswithUnix's Avatar
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    12,986
    Thanks
    781
    Thanked
    1,588 times in 1,343 posts
    • DanceswithUnix's system
      • Motherboard:
      • Asus X470-PRO
      • CPU:
      • 5900X
      • Memory:
      • 32GB 3200MHz ECC
      • Storage:
      • 2TB Linux, 2TB Games (Win 10)
      • Graphics card(s):
      • Asus Strix RX Vega 56
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 39 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Iiyama 27" 1440p
      • Internet:
      • Zen 900Mb/900Mb (CityFibre FttP)

    Re: I wonder when we'll see SOHO >1Gbps Ethernet?

    Quote Originally Posted by Splash View Post
    Cisco's 100Gbit switches are actually made up of 4 x 25Gbit chips per port - this allows them to do 25 (and 50Gbit, if what I'm told is to be believed) at a more competitive price than the 40Gbit offerings.


    Hilariously, the next speed of Fibre Channel switches is 128Gbit(!), having only recently launched 32Gbit. Flash has some serious go-faster stripes.
    Yeah, that is pretty much how it was explained to me. If you can have a pair of 25Gb interfaces for the price of a single 40Gb, you get a free 10Gb extra throughput.

    It should also scale better onto a 100Gb backbone, which I doubt we will see in SOHO for a while yet.

    Quote Originally Posted by watercooled View Post
    I've never really understood why there's no standardised way of truly bonding Ethernet links to make a single faster link - by that I mean being able to send e.g. a single TCP stream at 2Gbit over two gigabit links. It's not like it's technically hard to do: gigabit Ethernet itself uses the combined bandwidth of multiple pairs simultaneously, same with MIMO WiFi, DOCSIS channel bonding, and anything using e.g. OFDM. Even some DSL providers support bonding lines like that, as does copper Ethernet in the First Mile.

    I mean I can understand not supporting it over unmanaged switches, but I don't get why two links can't be combined below layer 2 so they share a MAC address with the switch/NICs handling the lower level stuff?

    Yeah I'm aware of balance-rr but it operates at a higher level and can mess with packet ordering which can cause issues with TCP, and AFAIK is only really a *nix thing, not really found on switches etc.

    Or am I wrong?
    I suspect it wouldn't be as useful as it sounds. If only one machine is sending bonded packets and the destination is on a single interface, the switch would have to rate-adapt 2Gb to 1Gb, which means storing the data rather than just sending it on. It might then run low on buffer space and have to tell the sending PC to slow down while it deals with what it already has. So you would need all of your devices on 2Gbit, which is a lot of extra cabling and expense for, at best, an occasional doubling in performance.

    Most people only want better throughput on a server, so the current system works well. Clients get a 1Gb channel, but with multiple interfaces in an aggregate you can talk to multiple clients at the same time, and the interfaces acting somewhat independently gives you some fault tolerance.

    If you are desperate for extra speed, then 10Gb is probably worth the expense, if only on a single point-to-point link with a card in each machine and no switch, to keep the cost down.
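    The rate-adaptation argument above can be shown with a toy simulation - all the numbers here are arbitrary units chosen purely to show the shape of the problem, not taken from any real switch:

    ```python
    # Toy model of rate adaptation: bonded 2Gb ingress, 1Gb egress.
    # The switch buffer fills at the rate difference until it has to
    # push back on the fast sender (e.g. with an Ethernet PAUSE frame).
    # All figures are arbitrary illustrative units.

    BUFFER_LIMIT = 10   # units of packet buffer in the switch
    INGRESS_RATE = 2    # units arriving per tick (bonded 2Gb sender)
    EGRESS_RATE = 1     # units leaving per tick (1Gb destination port)

    buffer_fill = 0
    for tick in range(1, 100):
        buffer_fill += INGRESS_RATE - EGRESS_RATE
        if buffer_fill >= BUFFER_LIMIT:
            print(f"tick {tick}: buffer full, pause the sender")
            break
    ```

    However big you make the buffer, it only delays the pause: with a sustained 2:1 rate mismatch the sender always ends up throttled back towards the egress rate, which is the point made above.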

