The International Telecommunication Union (ITU) pulled a rabbit out of the hat at the opening of its annual conference two Fridays ago by ratifying G.fast, the standard designed to deliver access speeds of up to 1 Gbit/s over existing copper wires.
G.fast works by extending the range of frequencies over which broadband signals travel. It is most likely to be used where fiber has already been deployed as far as a distribution point close to the office or home (so-called fiber to the distribution point, or FTTdp). The sweet spot most operators will target is probably 500 Mbit/s over some 100 meters.
The telecoms community had little doubt G.fast (more prosaically known as ITU G.9701) would get the green light: major operators and equipment vendors, especially in the US and Europe, have been lab and field testing the technology's capabilities (and limitations) for over a year now. Even so, numerous questions remain, including technical and operational issues, vendor-to-vendor interoperability, and just where G.fast deployment makes commercial sense.
The Broadband Forum (BBF), which has been tracking the ITU's timescales, has already indicated that the University of New Hampshire's world-leading InterOperability lab will lead its planned G.fast certification program. Initial product testing is expected to commence in the first half of next year, followed soon after by a series of interoperability tests. Certification is scheduled for next fall, a remarkably fast track for a communications technology first mooted in 2011.
Not surprisingly, chip suppliers such as Broadcom, Ikanos, Israeli startup Sckipio (founded to focus on this technology), and German group Lantiq (the latter two having joined forces to develop designs for residential gateways) have been very gung-ho about G.fast. All claim significant contributions toward the final spec that has been ratified.
At the system level, those making the biggest waves include Alcatel-Lucent, Adtran, and Chinese conglomerate Huawei.
While the equipment/operator trials are said to have been pretty successful, some operators have indicated they are not yet sure how many of their customers will need a product offering hundreds of megabits per second per customer.
One of the strongest proponents of the technology is BT, which recently said it has achieved an aggregate downstream/upstream data rate of 720 Mbit/s in its first, experimental trials over existing copper plant at loop distances of about 20m. The British telco has also identified difficult, but not insurmountable, operational issues, suggesting G.fast will most likely need to support 20 times as many nodes as the existing VDSL technology it has already deployed.
An identified headache for vendors will be the number of deployment scenarios for G.fast — the BBF has described 23 to date, including deployment on poles, in manholes, in multi-dwelling unit basements, and on apartment floors.
The reality is that current technology, VDSL and VDSL2 using vectoring, has already boosted speeds to 100 Mbit/s in fiber-to-the-cabinet (FTTC) deployments. G.fast, which also uses vectoring, will take speeds to the next level, but the proposed architecture will force operators to move their fiber plant even closer to the customers' distribution point.
It also needs stressing that, for many operators, G.fast will need to coexist with VDSL or VDSL2. This is taken into account in the just-approved physical layer standard and has been demonstrated to be technically feasible, but coexistence reduces bit rates by an estimated 20% at relatively short loop distances from the street cabinet, and at longer distances data rates fall significantly further.
So the likelihood is G.fast will clock in at less than the headline data rate, but still be fast enough for super-fast broadband services — and, importantly, at rates able to compete with cable operators deploying DOCSIS 3.0 technology, if not the emerging DOCSIS 3.1 specification.
To read the rest of this article, visit EBN sister site EETimes.