I wonder what non-telco applications will use this.
I wonder if something like a sports stadium has video requirements that would get close, with HFR 8K feeds?
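For a rough sense of scale, here's a back-of-envelope sketch of one uncompressed 8K HFR feed (the resolution, bit depth, chroma subsampling, and frame rate are my assumptions, not from the thread):

```python
# Back-of-envelope bitrate for one uncompressed 8K HFR video feed.
# Assumptions: 8K UHD = 7680x4320, 10-bit 4:2:0 chroma subsampling
# (15 bits/pixel on average), 120 fps.
width, height = 7680, 4320
bits_per_pixel = 15          # 10-bit luma + 10 bits of chroma shared over 4 pixels
fps = 120

bits_per_frame = width * height * bits_per_pixel
gbps = bits_per_frame * fps / 1e9
print(f"Uncompressed 8K@120: {gbps:.1f} Gbps per feed")  # ~59.7 Gbps
```

So a stadium pushing a dozen or so uncompressed camera feeds would already be in the hundreds-of-gigabits range, though in practice light mezzanine compression (e.g. JPEG XS) cuts that down substantially.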
To be fair, it all trickles down to home users eventually. We’re starting to see 10+ Gbps fiber in enthusiast home networks and internet connections. Small offices are widely adopting 100 Gbps fiber. It wasn’t that long ago that we were adopting gigabit Ethernet in home networks, and it won’t be long before we see widespread 800+ gigabit fiber.
Streaming video is definitely a big application where more bandwidth will come in handy. I think transferring large AI models in the hundreds of gigabytes may also become a large share of traffic in the near future.
Yup, my city has historically had mediocre Internet, and now they’re rolling out fiber and advertising 10 Gbps at a relatively reasonable $200/month.
I’m probably not getting it anytime soon (I’m happy with my 50/20 service), but I know a few enthusiasts who will. I’ll see what the final pricing looks like and decide if it’s worth upgrading my infrastructure (I only have wireless AC, so there’s no point in going above 300 Mbps or so).
Man, the tech is so pricey though. 10G switches are scary lol
Yeah. I honestly think 10GBASE-T was a mistake, since it fragmented the 10-gigabit ecosystem and made it so expensive.
The SFP+ switches aren’t too bad; here’s an 8-port unmanaged one for $150: https://www.amazon.com/MokerLink-Support-Bandwidth-Unmanaged-Ethernet/dp/B09W24RZDC/
SFP+ still pretty much requires PCIe cards or home-server-style hardware to use, but it’s pretty accessible. And you can buy 10GBASE-T adapters for backwards compatibility for $40.
Some Wi-Fi routers are even starting to adopt SFP+, even if they’re ungodly expensive: https://www.amazon.se/TP-Link-Deco-BE85-2-pack-Tri-Band-router/dp/B0C5Y46J1W/
Disaggregated compute might be able to leverage this in the data center. I could use this to let my server, gaming PC, and home theater share memory bandwidth on top of storage; heck, maybe even some direct memory access between distributed accelerators.
Gotta eat those PCIe lanes somehow
I don’t think people would fuck with amplifiers in a DC environment. Just using more fiber would be so much cheaper and easier to maintain. At least I haven’t heard of any current datacenters even using conventional DWDM in the C-band.
At best, Google was using BiDi optics, which I suppose is a minimal form of wavelength-division multiplexing.