Maybe they should be expanding their physical network first. I waited seven years after they supposedly came to my hometown, and their coverage area barely moved. Most of that is absolutely the fault of AT&T and Comcast stonewalling pole installations but they have the money to put up their own damn poles made of gold after that 77 billion profit report.
Now I moved elsewhere after covid and of course the only two real options still suck uncontrollably with no hope of any other big mover creating actual competition.
i am also incredibly disappointed in their lack of achievement here. they have a metric shit-tonne of liquid cash, lawyers and tech out the butthole… but no… we're back to ma' bell re-coagulating à la T2.
so much for being different
I suspect lawyers are stonewalling expansion for fear of making their monopoly cases worse
Google Fiber has supposedly been coming to the west side of Atlanta for 10-plus years. It hasn't expanded at all. Yet they still keep that "coming soon to your neighborhood" message up. And somehow where I am there's only one option available. Fucking shitty Comcast
There’s vaults labeled “GFBR” 200 yards from my house on the east side, and it’s still “coming soon.” Meanwhile, AT&T is out here digging every 2 years.
AT&T offered me 5Mbps lmao. Idk what they are digging for
Probably putting in VDSL to cash in on federal “high speed Internet” grants.
It’s so frustrating. I worked with a group that had their own community broadband council just to get broadband more widespread in their county.
Those grants are ridiculous, and an objection from another federal department about their grants creating a conflict, or another co-op claiming they already offer service, can derail a whole application. Applications that aren’t easy or cheap to produce, either.
Makes me sick to my stomach
IKR? The last time digsafe came out and marked, there were 3 separate AT&T lines twisting around each other like spaghetti, all going the same way and within 3 feet of each other. Like, you’ve already got conduit buried, just blow another fiber through it. Maybe some exec’s kid runs a horizontal drilling company.
Something something ISPs forcing municipalities to create service monopolies?
Yep, somethingsomethingsomething regulatory capture.
Dude I feel bad you’re relying on Google of all people to save you 😬
you should feel bad for everyone in the u.s. who has to suffer the government(s) that allow this bullshit to even be a problem.
You could really change US to North America here
I just want an internet provider that isn’t Spectrum or single-digit download speeds. Not having any real choice fucking sucks, especially since Spectrum is horrible.
Had AT&T fiber at my old place and god damn that shit went down one time for an hour the whole 3 and a half years I was there
Have you looked at mobile broadband from T-Mobile or Verizon? I haven’t tried either personally, but if I were in a broadband desert or an oligopoly market like most Americans, I would definitely give it a try and see how performance is. Prices weren’t great at launch, maybe $50+/mo for home internet; around here you can get $30–40/mo from fixed-line providers like CenturyLink, FiOS/Ziply, or comcrap. Feels like the mobile carriers really missed an opportunity by not pricing it cheaper to add a ton of subs, or at least get people to try it.
I wouldn’t want to calculate what it’d cost to replace all my switches with 25G capable ones… then all the network cards… You’d have to have a really specific application to justify it.
Just cost me 1K to replace 3 NICs, 1 router, and 2 switches to freaking 2.5Gb.
I got one of the 2.5 x 8 + 10 switches StH reviewed for like $80, and x520 nics are $20. I’m happy with it for homelab stuff!
Nice! I bought some used 10g UniFi stuff (dream machine and switch) for $500 and a pair of 10g NICs and a SFP+ cable for $80 on eBay. All in CAD. Already had some UniFi WAPs.
Homelabbing has been such a fun hobby, if a little expensive at times.
10Gbps used enterprise equipment is pretty cheap on eBay. Biggest problem I’ve had is getting compatible SFP+ adapters for the NICs.
Flexoptix reprogrammable transceivers are a godsend for that. We use them almost exclusively at work, and so do quite a few of our customers (universities and other places of higher education). But it’s probably hard to justify the cost of a reprogrammer box for a household. You can buy their transceivers pre-programmed though.
FS.com has something similar, but I can’t vouch for those; never tried them. Their patch cables are fine though.
You won’t but I will
Switch: MikroTik CRS504-4XQ-IN ($799.99)
Cabling: QSFP28 to 4× 25G SFP28 DAC ($63.00 per cable)
NICs: Intel XXV710 25Gb ($349.00)
I don’t know how many machines you have, but for two machines it’d cost you $1,562.97, and maxing out the switch would cost $6,651.83. But do you really have sixteen machines that need, or can even physically saturate, a 25Gb line?
I think it’s more reasonable to get something like Ubiquiti’s USW-Pro-Aggregation and have three machines capable of full speed and 28 machines capable of half-rate speeds (at a much lower cost per machine).
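For what it’s worth, here’s the rough arithmetic behind those totals, as a sketch using the list prices quoted above (it ignores tax/shipping, which is presumably why the quoted figures come out a couple of dollars higher):

```python
# Rough cost math for the 25G setup above, using the list prices from the
# comment. Ignores tax/shipping, so totals land slightly below the quoted ones.
SWITCH = 799.99   # MikroTik CRS504-4XQ-IN
CABLE = 63.00     # QSFP28 -> 4x 25G SFP28 breakout DAC
NIC = 349.00      # Intel XXV710 25Gb

# One breakout cable and two NICs for a two-machine setup.
two_machines = SWITCH + 1 * CABLE + 2 * NIC
# All four QSFP28 ports broken out -> up to 16 hosts.
maxed_out = SWITCH + 4 * CABLE + 16 * NIC

print(f"two machines: ${two_machines:,.2f}")  # ~$1,560.99
print(f"maxed out:    ${maxed_out:,.2f}")     # ~$6,635.99
```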
What about a router?
Both switches mentioned are L3 switches, meaning they are routers too.
I have no idea how well a L3 switch would work on a residential WAN connection. But don’t L3 switches lack features like NAT, DHCP, DNS, Firewall, port forwarding, etc?
DHCP and DNS (and Firewall, but I guess you don’t have a 25 Gbit/s FW) are of course easily moved elsewhere, but what about the others?
Well this is getting into the weeds a bit but TLDR it depends on the L3 switch.
For the mikrotik switch I mentioned, it runs the same RouterOS v7 as their actual routers. Anything you can do on a single purpose router you can do on the switch albeit at a slower speed for applications as the CPU in the switch isn’t as good.
For the ubiquiti switch… I’m not actually sure as ubiquiti’s L3 implementation is not exactly ideal (bordering on broken depending on who you ask)
Thanks!
I have only played around with L3 switches in packet tracer and iirc they missed a bunch of router features, not sure though.
Either way, packet tracer uses pretty old IOS versions and Cisco is pretty annoying so it wouldn’t surprise me if they locked it down on purpose.
That’s the early adopter tax. Same as it ever was.
Buy a media converter and do 25G -> 40G and run a 40GbE home net. Retired 40Gb gear is ludicrously cheap.
Edit. Or just stick a two port 100GbE card in your router, use an adapter to step one port down to 25Gb and run 40Gb off the other to the rest of the network.
If you’re struggling to think of a use-case, consider the internet-based services that are commonplace now that weren’t created until infrastructure advanced to the point they were possible, if not “obvious” in retrospect.
- multimedia websites
- real-time gaming
- buffered audio – and later video – streaming
- real-time video calling (now even wirelessly, like Star Trek!)
- nearly every office worker suddenly working remotely at the same time
My personal hope is that abundant, bidirectional bandwidth and IPv6 adoption, along with cheap SBC appliances and free software like Nextcloud, will usher in an era where the average Joe can feel comfortable self-hosting their family’s digital content, knowing they can access it from anywhere in the world and that it’s safely backed up at each member’s home server.
Video calls were all over 1950s futurism articles. These things do get anticipated far ahead of time.
4K Blu-ray discs have a maximum bitrate of 128 Mbps. Most streaming services compress more heavily than that; they’re closer to 30 to 50 Mbps. A 1Gbps feed can easily handle several people streaming 4K video on the same connection provided there’s some quality of service guarantees.
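Back-of-the-envelope, taking the upper-end ~50 Mbps streaming bitrate mentioned above:

```python
# How many compressed 4K streams fit on a 1 Gbps line?
# Assumes ~50 Mb/s per stream (the upper end of typical streaming bitrates).
LINE_MBPS = 1000
STREAM_MBPS = 50

streams = LINE_MBPS // STREAM_MBPS
print(f"{streams} simultaneous 4K streams fit on a 1 Gbps line")  # 20
```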
If other tech were there, we could likely stream a fully immersive live VR environment to nearly holodeck-level realism on 1Gbps.
IPv6 is the real blocker. As you say, self-hosting is what could really bring bandwidth usage up. I think some kind of distributed system (something like BitTorrent) is more likely than files hosted on one specific server, at least for publicly available files.
Also, going big on bandwidth ahead of the requirement curve means most people won’t use it to its full extent for a while. It’s much easier to implement and maintain such a network than one that’s trying to catch up with demand.
I doubt a home server centered around software like Nextcloud will ever become commonplace. I think a more probable solution involves integrating new use cases with devices people already have, or at least familiar form factors. For example, streaming from your smart TV device (Chromecast, Roku, Apple TV, the actual TV itself) instead of from the cloud, or file sync using one of these devices as an always-on server.

But in both of these cases there is an inherent benefit to using a centralized cloud operator. What are the odds that you have already downloaded the episode to stream to your TV box, but not to your phone, if that was where you intended to watch it anyway? And for generic storage, cloud providers replicate that data in various locations to ensure higher redundancy and availability than what a home server or similar device could guarantee. I presume new use cases will need to be more creative.
I was involved in one of these Google Fiber rollouts several years ago. Google simply doesn’t know what the fuck they want or what they are doing as far as installing outside plant goes.
EDIT: To clarify, they simultaneously had no fucking clue what they were doing & also wanted to micromanage all of their contractors.
I couldn’t care less tbh. Gigabit is more than enough.
And we’re still stuck on IPv4. Going to IPv6 would do a lot more than 1Gbps connections would.
And what do you think it would do for you?
- Better routing performance
- No longer designing protocols that jump through hoops to deal with lack of direct addressing
Fucking CGNAT…
Sorry to be the one to mention it, but NAT is here to stay. Even if IPv6 has enough address space for everything to have a public address, it’s still a good security measure to have a local area network with a firewalled exit node. Especially considering how popular IoT has become and just how little people care about the security of those devices.
No, stop this. NAT is not a security measure. It was not designed as one, and does not help security at all.
Why doesn’t it help security? Is everybody’s device supposed to be publicly accessible?
Because hiding addresses does very little. A gateway firewall does not need NAT to protect devices behind it.
In fact, NAT tends to make things more complicated, and complication is the enemy of security. It’s one extra thing that firewalls have to account for. Firewalls behind NAT also don’t know where traffic is originally coming from, meaning they have one less tool at their disposal. This gets even worse with CGNAT, which sometimes has multiple levels of NAT.
Security is a very common objection to getting rid of NAT, and it’s wrong.
I have 10 gig at home, and powerful enough networking hardware that can take advantage of it (Ubiquiti stuff)
Nothing can ever saturate the line. So it’s great for aggregate, but that’s it
It’s not often that I can saturate a 1Gbps line, unless you have a large household I don’t see much point in going over 1Gbps right now. Though I’m sure there are some exceptions.
That’s what I was gonna say: it’s not that I use enough bandwidth to really need 1Gbps, it’s that the line is never even temporarily saturated. Just rock solid.
Having a connection that’s not even close to saturated (or a backbone, for that matter) means lower latency in general. But it also means future-proofing and timely issue resolution, as you catch problems early on.
Future proofing an Internet line doesn’t make much sense to me. If a higher speed plan is available, I’d just upgrade my plan if the need arises, save money in the meantime.
Flip it around and look from the ISP’s point of view. Once fiber is connected to a house, there are few good reasons to use anything else. Whoever is first to deploy it wins.
Now look at it from a monopoly ISP’s point of view. You’re providing 100Mbps service on some form of copper wire, and you’re quite comfortable leaving things like that. No reason to invest in new equipment beyond regular maintenance cycles. If some outside company tries to start deploying fiber, and if they start to make inroads, you’re going to have to (gasp) spend hundreds of millions on capital outlays to compete with them. Better to spend a few million making sure the city never allows them in.
That too. For an ISP it pays off to future-proof to a degree. More to the point, it’s easier to aggregate high-bandwidth users, since no one will be using their full connection speed all the time; it’s simply impossible. So with 100Gbps they can give 25Gbps service to a lot more people than 4, closer to 40 or so. Good marketing, a test run for the future, at a decent investment now. It’s how things should be.
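A quick sketch of that oversubscription math; the ~10% average-utilization figure here is my own illustrative assumption, chosen to reproduce the “closer to 40” estimate:

```python
# Oversubscription sketch: how many 25 Gb/s subscribers can share a
# 100 Gb/s uplink if each one only uses ~10% of their peak on average?
# (The 10% utilization figure is an assumption for illustration.)
UPLINK_GBPS = 100
PLAN_GBPS = 25
AVG_UTILIZATION = 0.10

no_multiplexing = UPLINK_GBPS // PLAN_GBPS                            # 4 subs
with_multiplexing = int(UPLINK_GBPS / (PLAN_GBPS * AVG_UTILIZATION))  # 40 subs
print(no_multiplexing, with_multiplexing)  # 4 40
```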
deleted by creator
Man, I’d love to sit on that. Growing up with 56k and living with 100Mb/s now is already a big difference, but it shows when I push and pull docker images or when family accesses the homeserver. 1Gb/s would be better, but probably I’ll somehow use up the bandwidth with a new toy. 10Gb would keep me busy for a long time. 20Gb would allow me try out ridiculous stuff I haven’t thought of yet.
Same. I got 10gbit because there was some early competition as fiber got more widespread. Now my provider has slower offers at lower prices, but I don’t mind the extra bandwidth in case I ever need it, and I have a grandfathered offer so I pay the same as for 1gbit.
Paying the same rate is certainly an instance where it makes sense. Plus, you can show off to friends!
how long until google kills it?
I thought they already did, so this is unexpected.
Fiber infrastructure? More likely they’d sell it if they wanted out.
That’s what they’re counting as killing based on the killed by Google website
This is still a thing? I thought they crushed it like 10 years ago
No, they severely underestimated how hard it would be to overcome the telcos and their lobbying.
Would be more exciting and worth paying attention to if Google Fiber wasn’t basically living in an iron lung over at Alphabet these days since they halted major expansion.
You could start your own VPC data center with this lmao
Always wanted to be my own datacenter 😄
I’ll never understand how you guys in the US are fine with having bandwidth limits on your broadband connections. I’d be pissed. I even have unlimited on my phone. Like wth?
What makes you think people are fine with it? ISPs have monopolies over service areas and can do whatever the fuck they want. They have monopolies because of corporate lobbying. No amount of voting gets these corrupt fucks out of office bc votes literally do not matter and there’s only two parties, they’re both to the right of center, and they’re both bought and sold. Just to really make sure, we’re all taught from birth that the US is peak civilization and all other countries are backwater shitholes.
Where in the world do you not have bandwidth limits? If there were no bandwidth limits I could just DOS my entire ISP by downloading petabytes between two of my own computers.
I think you are mistaking bandwidth limits with data caps?
At some point all devices have a bandwidth limit. Even if you somehow had a 10gb/sec phone data connection (which is absolutely not possible) your phone device literally cannot transfer data that fast.
Why are people doubting this? This opens up massive possibilities for people, especially those who want to start businesses outside of city centers.
You could:

- host your own home servers and never worry about bandwidth
- get 8k streams without stutter (a low-end 8k stream requires 50Mb/s; a family of 4 would need a minimum of 200Mb/s just for video)
- send 8k streams without stutter
- offload most of your data to a datacenter on the other side of the planet and not worry about access speeds
- boot into a browser or a minimal frontend on a low-powered device and mount your home directory
- offload computing to the cloud (no need for a gaming PC if you can just play games online)
The biggest thing would be 8k streams. 360 8k streams would be even crazier. 360 videos are filmed using 3-6 cameras depending on how much fish-eye you want. True 360 requires at least 6. If each is filmed at 1080p that’s ~6k total resolution, but since you’re only watching one section of the video at a time, you’re really seeing 1080p.
Those “8k 360 videos” up on YouTube are a lie! They aren’t 6×8k, but most likely 8k divided by the number of cameras. True 360 8k video would be 6×8k cameras. A single 8k stream requires ~50Mb/s minimum. Multiply that by 6 and you’re at 300Mb/s just for a single 360 8k stream. A family of 4 means 1.2Gb/s just for everybody to watch that content, and that’s the minimum. If you have a higher bitrate and aren’t streaming at 30 fps, you can quite easily double or quadruple that. For a family of 4, that means ~5Gb/s if everybody’s watching that kind of content in parallel.
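The arithmetic above, spelled out (using the comment’s own assumptions of ~50 Mb/s per 8k stream and 6 camera feeds):

```python
# Bandwidth for "true" 360-degree 8k streaming, using the numbers above.
PER_8K_MBPS = 50   # minimum bitrate for one 8k stream
CAMERAS = 6        # camera feeds for full 360 coverage

one_viewer_mbps = PER_8K_MBPS * CAMERAS   # 300 Mb/s per viewer
family_min_mbps = one_viewer_mbps * 4     # 1200 Mb/s = 1.2 Gb/s, bare minimum
family_hi_mbps = family_min_mbps * 4      # ~4.8 Gb/s with higher bitrate/fps
print(one_viewer_mbps, family_min_mbps, family_hi_mbps)  # 300 1200 4800
```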
But this is just the beginning. Why stop at “video”? These kinds of transfer speeds open you up to interactive technologies.
It would still not be enough to stream 8k without any compression whatsoever, to reach the lowest latency.
8k = 7680 × 4320 = 33,177,600 pixels. Each pixel can have 3 values: Red Green Blue. Each take 256 (0-255) values, which is 1 byte, which means 3 bytes just for color.
3 * 33,177,600 = 99,532,800 bytes per frame
99,532,800 bytes / 1,024 = 97,200 kilobytes
97,200 kilobytes / 1,024 = ~95 megabytes

So 95MB per frame. Let’s say you’re streaming your screen with no compression at 60Hz, i.e. about 60 fps (minimum). That’s 60 × 95MB/s ≈ 5.7GB/s. Multiply that by 8 to get bits and you’re at ~45.6Gb/s, which is way above 25Gb/s. Uncompressed 4k at 60Hz is roughly a quarter of that (~11.4Gb/s), so it would actually fit, and 2k certainly would. I for one would like to see what an uncompressed 2k stream would look like. In the future, you could have your gaming PC at home hooked up to the internet, go anywhere with a 25Gb/s line, plop down a screen, connect it to the internet, and control your computer at a distance with minimal lag as if you’re right at home.
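As a sanity check, here’s a small sketch computing raw bitrates for a few resolutions (24-bit color, 60 fps; it uses decimal units, so the numbers land slightly above figures computed with binary megabytes):

```python
# Raw (uncompressed) video bitrate in Gb/s at 24-bit color.
def uncompressed_gbps(width, height, fps=60, bytes_per_pixel=3):
    return width * height * bytes_per_pixel * fps * 8 / 1e9

print(round(uncompressed_gbps(7680, 4320), 1))  # 8k:        47.8 Gb/s
print(round(uncompressed_gbps(3840, 2160), 1))  # 4k:        11.9 Gb/s
print(round(uncompressed_gbps(2560, 1440), 1))  # 2k/1440p:   5.3 Gb/s
```

So a 25Gb/s line sits between uncompressed 4k and uncompressed 8k.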
In conclusion, 25Gb wouldn’t allow you to do whatever you like. You could do a lot, but there’s still room. We’re not at the end of the road yet.
Yeah, man. Thank God someone is finally thinking about the family of 4 simultaneously watching 8K 120Hz 360 degree streams.
Also,
- bandwidth isn’t the same as latency. This would not let you remote control “with minimal latency”; it would be exactly the same as it is with, say, 20Mbps download.
- lossless and visually lossless compression dramatically reduces the amount of bandwidth required to stream video. Nobody will ever stream uncompressed video; it makes no sense.
- If you want to know what an uncompressed 2K stream looks like, look at a 2K monitor.
Again, just because it isn’t being done yet, doesn’t mean it won’t be. Every time technology progresses, we find new and interesting ways to fill the new space created by it.
Nobody will ever stream uncompressed video, it makes no sense
Nobody thought it would ever make sense to stream games over the internet with Nvidia Go (or whatever it’s called), but it’s being done. Nobody thought it would make sense to turn a browser into a nearly full operating system, but that’s about done too.
If you want to know what an uncompressed 2K stream looks like, look at a 2K monitor.
Genius, why didn’t I think of that. Thanks for pointing that out.
bandwidth isn’t the same as latency
Wow, I had no idea! I bet a 20Gb line won’t get under 1s of ping. There’s absolutely no way.
20 gig networking — even just a switch — is so expensive. 10 gig is already out of reach for 99% of the population, even network nerds. We’re just now in the past couple years seeing a standard of motherboards with 2.5gbps rj45. A lot of brand new nvme ssds can’t saturate 25gbps. There are just so many bottlenecks. I’m not saying I wish dearly those didn’t exist, but I know from my experience upgrading to 10 gig just how many there are.
https://store.ui.com/us/en/pro/category/all-switching/products/usw-pro-aggregation
Personally I am more excited for high speed networking for homelabs to come down in price. At this point in my life I don’t feel the need to access my network outside of my house at super high speeds. My 100mbps up is fine for when I’m out of the house, and 10gbps is more than I need when I’m home.
Wouldn’t they provide you with a 20Gb-compatible router? I was curious, and Cat8 LAN cables support 40Gb/s. They are 3× as expensive as Cat7, but I’m just a few meters away from the router, so about 10-15€ and that’s the cables done.
Ah… the PCI-e ethernet card is where it gets pricey 😮 250€ for 10Gb card.
Damn…
Although, I’d be future proof for sure. That kind of speed will probably be enough for 20 years or so.
FWIW 10 gig cards can be much cheaper than 250€ as long as you’re willing to use SFP+ (I got a used pair of cards with a 10m optical cable for $90 CAD) but 25gig is where it gets stupid.
Even if they do supply a capable router, you will probably want at least a switch since most ISP supplied routers only have a few ports. Plus, it’s not uncommon for an ISP router to deliver their advertised speed over only one port, even if the router has several. At the end of the day, though, if you’re paying for >gigabit you probably want to set up your own firewall with a fancy router so you can properly configure your network.
Crazy that gigabit Ethernet is 25 years old and still the de facto standard. IMO we should all be able to afford 100gig inside our homes, finding the bottleneck inside our machines, not between them. Alas, 10gig is for the enthusiasts, and anything above that is for the elites.
Indeed. I’m getting much less than 1/10th of my provisioned 10Gbps for being cheap like that. It’s still plenty fast, though.
10Gbps is great for feeding a building
At this point I just want affordable 2.5Gb gear
Totally. IMO 2.5gbps should be in every new switch and router without any extra price.
Gigabit came out in 1999. No other standard has moved so slow.
offload computing to the cloud (no need for a gaming PC if you can just play them online)
Unless you can live very close to one of the data centers doing the computing to minimize the number of hops, that just isn’t even remotely doable with modern networking equipment
Google tried it with stadia and gifs like this show why it doesn’t work for most people
There are people on the internet with about 2-3 ms of ping. I’m not a network engineer to tell you how that’s even possible, but I’ve seen it. I’m on 15ms to most game servers right now on a copper line.
Google Stadia failed for different reasons. Nvidia Go (or whatever it’s called) still exists. Just because I have a shitty copper line doesn’t mean fibre will be as shitty.
Am thinking that in the somewhat near future, network boot will become a lot more dominant than it used to be. Infrastructure speeds are becoming sufficient to accept a somewhat longer boot in exchange for significantly simpler administration and troubleshooting.
I’m all about thinking ahead, but this seems insane. Really struggling to think of a home use case that needs these speeds.
I run a relatively small server for family and friends and I haven’t moved to 2gig plan because even that seems like overkill.
No one needs these speeds unless you have a home office, and even then it’s a stretch. For residential buildings it might make sense, but the USA doesn’t have those, or at least not as many. However, it’s far easier to iron out the kinks and issues with early adoption, and aggregation is a breeze then.
I’m all for insane early adopters ironing out the kinks in stuff like this. I’m sure we’ll need these speeds at some point, but I can’t imagine average people will in the next 15-20 years.
I’d say this is more bandwidth than my entire road would need in total.
Ew it’s PON based
Why would you care that it’s passive (PON: passive optical network)? As I understand it, the limitations of passive vs active wouldn’t have any impact on the end user. It’s not something I know a lot about, though.
Because PONs are just fundamentally worse. Why would anyone turn fiber of all things into a shared medium. Just lay fibers from the dwelling up to the central office. It’s barely any costlier since the real expense is the digging, not the fiber. And it’s basically guaranteed to scale forever by simply replacing the optics on the ends. That kind of infrastructure can also be leased out to other providers on an individual dwelling granularity. With PONs competitors are forced into reselling bandwidth, at best, or the infrastructure can be monopolised fully.
As opposed to what? Active? That’s not necessary in local networks
As opposed to a normal fiber link to the switch in the central office. No oversubscription or shared media.
I don’t understand how it is shared media in a PON system. What is the name for this alternative? I’d like to look into it.
In a typical PON (GPON, XG-PON, XGS-PON) you have a single fiber from the central office to an optical splitter in the street, from where up to 64 subscribers are connected with one fiber each. The bit between central office and splitter is shared. The splitter is passive and just sends 1/64 of the light to each downstream port; in the other direction it combines all the downstream light towards the upstream port.
The OLT in the central office sends on one wavelength (e.g. 1577 nm) and all subscriber ONTs send on another common wavelength (e.g. 1270 nm). In both directions a time-division technique is applied. I believe in the downstream the individual time frames are encrypted with different keys in turn, such that only the specific destination ONT can read the content of its time frames. In the upstream, the ONTs have to make sure to send only in their own slots, as otherwise the OLT would receive superimposed optical signals that couldn’t be read. You can probably see how this could go wrong if a neighbor had malfunctioning equipment.
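Two consequences of that 1:64 split can be sketched numerically; the 10 Gb/s shared rate here is an assumption (roughly XGS-PON), not something fixed by the splitter itself:

```python
import math

# 1:64 PON split, as described above. Assumes a ~10 Gb/s symmetric rate
# (XGS-PON-like) shared among everyone on the splitter.
SPLIT = 64
SHARED_GBPS = 10.0

# Each leg of a passive splitter gets 1/64 of the light -> optical loss in dB.
split_loss_db = 10 * math.log10(SPLIT)        # ~18 dB

# Worst-case fair share if every subscriber transmits at once.
fair_share_mbps = SHARED_GBPS * 1000 / SPLIT  # 156.25 Mb/s

print(round(split_loss_db, 1), fair_share_mbps)  # 18.1 156.25
```

In practice statistical multiplexing means each subscriber usually sees far more than the worst-case share, but the optics budget has to absorb that ~18 dB regardless.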
The alternative doesn’t really have a set of standards like PON, as you can just use whatever optical transceivers you want for each customer individually. Though I guess that for operational reasons an ISP would still standardise the setup for all customers. For example the ISP whose services I subscribe to tells customers to use “Bidir LR, 10 km, TX1310, RX1490-1550 nm”, as 1G, 10G, or 25G, depending on which you order.
To distinguish such a setup from a PON setup, I have seen it called point-to-point (P2P).