Part 5 of 5 – New gTLD SSR-2: Exploratory Consumer Impact Analysis

Throughout this series of blog posts we’ve discussed a number of issues related to the security, stability and resilience of the DNS ecosystem, particularly as we approach the rollout of new gTLDs. We’ve also highlighted a number of issues that we believe are outstanding and need to be resolved before the safe introduction of new gTLDs can occur – and we’ve tried to provide some context as to why, while noting that nearly all of these unresolved recommendations have come from parties beyond just Verisign over the last several years. We received a good bit of flak from a small number of folks asking why we’re making such a stink about this, and we’ve attempted to meter our tone while increasing our volume on these matters. Of course, we’re not alone here; a growing list of others has made the same point. SSAC’s SAC059, published just a little over 90 days ago, illustrates this in part in its Conclusion:

The SSAC believes that the community would benefit from further inquiry into lingering issues related to expansion of the root zone as a consequence of the new gTLD program. Specifically, the SSAC recommends those issues that previous public comment periods have suggested were inadequately explored as well as issues related to cross-functional interactions of the changes brought about by root zone growth should be examined. The SSAC believes the use of experts with experience outside of the fields on which the previous studies relied would provide useful additional perspective regarding stubbornly unresolved concerns about the longer-term management of the expanded root zone and related systems.

As discussed previously, the ICANN Board did resolve on May 18, 2013, to undertake a study on naming collisions and their potential impacts. At the most recent ICANN meeting in Durban, South Africa, Lyman Chapin, Jeff Moss, and a number of other folks presented some of the preliminary findings from the study during the SSR Panel Session (you can find the audio here and some of the slides used here). Based on the dialogue and initial findings presented there, and the recurring requests for interdisciplinary and cross-functional studies, we spun up a small overlay team here for a couple of weeks to conduct an exploratory study of our own and assess the feasibility of such an endeavor. Some of the discussion in the resulting report, titled New gTLD Security, Stability, Resiliency Update: Exploratory Consumer Impact Analysis and published as a Verisign Labs Technical Report, includes:

Consider what might happen if overnight, some networked systems inside a healthcare provider in Japan began to suffer undiagnosed system failures. Would it be a concern if some installations of banking software in the islands of the Caribbean became non-responsive?  Perhaps pause would be warranted when embarking on a visit to a developing nation, and discovering that the hotels in the region have suffered outages of their reservation systems. What if a rash of major enterprises around the world began suffering from widespread networked system failures of their internal operations (payroll, benefits, VoIP systems, etc.)? What if voice communications for home users became impacted by disruptions? Are specially branded names actually less secure than they were under more innocuous naming schemes? All of these scenarios have a measurable dependence on the DNS, and our measurements suggest they might also have a measurable dependence on the lack of certain generic Top Level Domain (gTLD) strings being delegated from the DNS root zone.

….

To augment that work, in this study we evaluate the risks that could be transferred to Internet users by the introduction of as many as 1,000 new gTLDs (in the first year alone). To evaluate the “risk,” we propose a novel set of measures that represent actual risks to end users, and illustrate their incidence by measuring operational threat vectors that could be used to orchestrate failures and attacks. We present our candidate quantification in the form of a Risk Matrix, and illustrate one possible way to interpret its results. What we found is that while some may claim that the relatively abrupt addition of more than 1,000 new gTLDs is not a concern, there are quantifiable signs that profound disruptions might occur if the current deployment trajectory is followed. This may be especially true if recommendations that have been made are not fully resolved. For example, we investigate issues that include Man in the Middle (MitM) attacks, internal Top Level Domain (iTLD) collisions with applied-for gTLD strings, X.509 certificate ambiguities, and regional affinities that could result in collateral damage to unsuspecting regions. Indeed, our measurements suggest that there may be measurable dependencies for undelegated gTLD strings of .accountant in the U.S. Virgin Islands, .medical in Japan, .hotel in Rwanda, and .corp across many topologically distributed Autonomous Systems (ASes) in the Internet. We also find evidence that there may exist a dependency between a popular Small Office / Home Office (SOHO) router vendor’s SIP boxes and the applied-for gTLD string .box. What’s more, with the intention for some applied-for gTLD strings, such as .secure, to function as “‘secure neighborhoods’ on the Net” [39], our risk matrix suggests that their semantic meaning opens them up to risk factors from current traffic that other, lower-profile strings don’t start off with.
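(An aside from us, not from the report itself: to make the mechanics of a risk matrix a little more concrete, here is a minimal, purely illustrative sketch of how normalized per-string measurements might be folded into a single comparable score. The factor names, weights, and values below are invented for this post and are not figures from the study.)

```python
# Illustrative only: combine normalized risk factors for candidate gTLD
# strings into a weighted score. Factor names, weights, and values are
# hypothetical, not figures from the technical report.

FACTORS = {                       # weight per measured threat vector, summing to 1.0
    "root_query_volume": 0.25,    # NXDOMAIN traffic seen at the root today
    "internal_cert_use": 0.25,    # X.509 certificates carrying matching internal names
    "proxy_autoconfig":  0.20,    # proxy-style lookups that could enable MitM
    "regional_affinity": 0.30,    # concentration of queries in a few regions
}

# Per-string scores normalized to [0, 1]; these numbers are made up.
MEASUREMENTS = {
    ".corp": {"root_query_volume": 0.9, "internal_cert_use": 0.8,
              "proxy_autoconfig": 0.7, "regional_affinity": 0.3},
    ".box":  {"root_query_volume": 0.4, "internal_cert_use": 0.1,
              "proxy_autoconfig": 0.2, "regional_affinity": 0.6},
}

def risk_score(scores: dict) -> float:
    """Weighted sum of normalized factor scores for one candidate string."""
    return sum(FACTORS[name] * scores[name] for name in FACTORS)

for string, scores in sorted(MEASUREMENTS.items(),
                             key=lambda kv: risk_score(kv[1]), reverse=True):
    print(f"{string:8s} composite risk ~ {risk_score(scores):.2f}")
```

The point of a composite like this is that a string with modest raw query volume but troubling certificate or proxy exposure can still rank high, which aggregate query counts alone would miss.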

…..

Conclusion

In this study, we conduct one of the largest investigations of DNS root zone traffic to date, with DNS queries from up to 11 of the 13 root instances, dating back to 2006. In addition, we propose a novel methodology to gauge the risk posed by applied-for new gTLD strings, and quantify it using measurements of DNS, the World Wide Web, X.509 certificates, regional preferences, and inter-query timing analysis.
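(Another illustrative aside from us, not the report’s actual method: one way to picture the inter-query timing input is to look at how regular the inter-arrival times are for a given undelegated string from a single source. Very regular gaps hint at automated retry behavior that will not simply go away once the string is delegated. The timestamps and the variance threshold below are made up.)

```python
# Sketch: characterize inter-arrival times of queries for an undelegated TLD
# from one source, to separate periodic, automated retries (e.g., misconfigured
# software) from sporadic human-driven lookups. Input data and the threshold
# are assumptions for this post.

from statistics import mean, pstdev

# Timestamps (seconds) of queries for one string from one source address.
timestamps = [10.0, 40.1, 70.0, 99.9, 130.2, 160.0]

gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
if gaps:
    avg, spread = mean(gaps), pstdev(gaps)
    # A very regular cadence (low spread relative to the mean) suggests an
    # automated process that will keep querying the string after delegation.
    automated = spread < 0.1 * avg
    print(f"mean gap {avg:.1f}s, stdev {spread:.1f}s, "
          f"{'machine-like' if automated else 'sporadic'} pattern")
```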

What we found was that quantifying the risk that applied-for new gTLDs pose to Internet users goes beyond simply evaluating query rates for as-yet non-delegated new gTLD strings. Indeed, we found several instances where automatic proxy protocols, X.509 internal-name certificates, and regional traffic biases could leave large populations of Internet users vulnerable to DoS and MitM attacks immediately upon the delegation of new gTLDs.
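(One more aside from us: the internal-name certificate angle is easy to picture. Take the dNSName entries from certificates issued for non-delegated suffixes and check them against the applied-for strings. The sample names and the short string list below are invented for illustration; a real pass would walk a certificate corpus rather than a hard-coded list.)

```python
# Toy illustration: flag X.509 subjectAltName dNSName entries whose rightmost
# label matches an applied-for gTLD string. Sample names and the string list
# are invented; they are not data from the report.

APPLIED_FOR = {"corp", "mail", "box", "secure"}

san_dns_names = [
    "intranet.example.corp",
    "voip-gw.home.box",
    "www.example.com",        # delegated TLD, not a collision candidate
]

def colliding(name: str) -> bool:
    """True if the name's rightmost label is an applied-for, undelegated string."""
    return name.rstrip(".").rsplit(".", 1)[-1].lower() in APPLIED_FOR

for name in san_dns_names:
    if colliding(name):
        print(f"possible collision: certificate name {name!r}")
```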

Our measurements and quantification of risk are offered here as candidate approaches. While we feel there is quantifiable evidence of risk, there is clearly room for alternate methodologies, and this effort will certainly benefit from community input and more comprehensive analysis. However, we believe that this study constitutes the first attempt to conduct an interdisciplinary (and consumer impact) analysis of the new gTLDs in the global DNS.

One of the tangible benefits of this study has been quantitative analysis that has qualified some of the implications of unresolved recommendations. In this work, we have presented evidence suggesting that these unresolved recommendations have potentially damaging implications for general Internet consumers, corporations, and the public interest. Additionally, we believe that the new gTLD program could pose very real risks both to the entities charged with effectuating new gTLD delegations and to those responsible for giving due consideration to (and implementing) the recommendations provided by ICANN’s advisory committees and expert contributors, if those recommendations remain unresolved.

While the 2005 National Academies study [48] recommended new delegations on the order of tens per year, we are not advocating any particular number. We are, however, advocating that instrumentation be in place and that existing recommendations be enacted to support the safe introduction of new gTLDs.

We believe that further study, and an express focus on implementing the recommendations already provided, is critical to progressing the new gTLD program in a safe and secure manner for all stakeholders. We believe this work has demonstrated evidence that risks exist, both to the existing Internet user base and to new gTLD applicants and service consumers. We believe that recognizing this evidence, and explicitly considering, planning, and appropriately resourcing further study and the resolution of outstanding recommendations, is the most prudent and expeditious way to move forward.

This study illustrates that a statistically relevant signal does exist in the current dataset, and that it can be used as a first-order pass to identify acuteness of impact, although a much more sustainable instrumentation capability is required. Furthermore, we believe this study makes it abundantly clear that passing judgment on risk based on aggregate query volume alone sorely misses critical signals like regional affinities (e.g., .accountant and .love in the U.S. Virgin Islands, .tjx and .church in Haiti, etc.) and lacks any capability to consider how acutely consumers and enterprises may be impacted.
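On that regional-affinity point specifically, the kind of signal we mean can be sketched as a simple over-representation ratio: compare a country’s share of queries for a given undelegated string against that country’s share of overall root traffic. The counts below are placeholders for illustration, not measurements from the report.

```python
# Sketch of a regional-affinity signal: how over-represented is a country in
# the query stream for one undelegated string, relative to its share of all
# root traffic? Counts are placeholders, not report data.

from collections import Counter

total_by_country  = Counter({"US": 5_000_000, "JP": 1_200_000, "VI": 20_000})
string_by_country = Counter({"US": 900, "JP": 150, "VI": 600})  # e.g. one string

total_all  = sum(total_by_country.values())
string_all = sum(string_by_country.values())

for cc, n in string_by_country.most_common():
    expected_share = total_by_country[cc] / total_all
    observed_share = n / string_all
    ratio = observed_share / expected_share
    print(f"{cc}: {ratio:.1f}x over-represented" if ratio > 1
          else f"{cc}: at or below baseline")
```

A string whose traffic is heavily concentrated in one small region can cause acute, localized harm even when its aggregate query volume looks unremarkable at the root.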

As you’ll see in the technical report, we don’t actually have any new recommendations (well, maybe one, on the periphery). We’d just really like to see the ones that have already been made enacted. We recommend you give the technical report a read, and we most certainly welcome your feedback. We intend to continue to lean into this, as we believe addressing these unresolved recommendations is paramount.

Oh, and one final thing. As conveyed in the report, and as noted in an open letter to ICANN and NTIA, the mechanical capability to delegate a vast quantity of new gTLDs exists, but we strongly believe that exercising this capability could undermine the stability of the DNS ecosystem. That is, it’s important not to conflate our current ability to expedite delegations with the advisability of such action; multiple organizations have issued specific advice on this distinction for quite some time.

Danny McPherson
