Danny McPherson | Apr 15, 2014
Recent attacks targeting enterprise websites have created greater awareness of how critical DNS is to the reliability of Internet services, and of the potentially catastrophic impact of a DNS outage. The DNS, a complex system of root and lower-level name servers, translates user-friendly domain names into numerical IP addresses. With few exceptions, DNS lives in a grey area between IT and network operations. With the increasing occurrence of distributed denial of service (DDoS) attacks, advanced persistent threats (APTs) and the exploitation of user errors through techniques such as typosquatting and phishing, enterprises can no longer take a passive role in managing their DNS infrastructure.
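The name-to-address translation the DNS performs can be observed from most programming languages. Here is a minimal sketch in Python using the standard library's resolver interface (the host names shown are only examples):

```python
import socket

# Resolve a host name to its IPv4 addresses using the system's stub
# resolver, which queries the DNS hierarchy (root, TLD and authoritative
# name servers) on our behalf for registered names like "example.com".
def resolve(name):
    results = socket.getaddrinfo(name, None, family=socket.AF_INET)
    # Each result tuple ends with a sockaddr; its first element is the IP.
    return sorted({sockaddr[0] for *_, sockaddr in results})

# "localhost" resolves locally; a registered domain would go out to DNS.
print(resolve("localhost"))
```

When this translation fails, applications cannot reach services at all, which is why a DNS outage cascades so quickly.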
Implications of DNS Outages
With Verisign's average daily DNS query load reaching 82 billion during the fourth quarter of 2013, and peaking at 100 billion, it is vital that Internet services operate continuously. Without a doubt, the cost and requirements of running critical Internet infrastructure at these performance levels are high. However, if DNS operations were significantly interrupted for an extended period, the potentially devastating results for businesses on the Internet could include any of the following:
- Revenue losses
- Impact to cash flow
- Productivity losses
- Damage to reputation and goodwill
- Compliance and/or reporting penalties
- Penalties and loss of discounts
- Impact to customers and strategic partners
- Diminished competitive advantage
- Diminished employee morale and confidence in IT
Blog Moderator | Apr 09, 2014
Today Verisign announced five million domain names were added to the Internet in the fourth quarter of 2013, bringing the total number of registered domain names to 271 million worldwide across all top-level domains (TLDs) as of Dec. 31, 2013, according to the latest Domain Name Industry Brief. The increase of five million domain names globally equates to a growth rate of 1.9 percent over the third quarter of 2013. Worldwide registrations have grown by 18.5 million, or 7.3 percent, year over year.
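The growth rates follow directly from the reported totals. A quick check of the arithmetic, with all figures in millions of names as reported in the brief:

```python
# Figures from the Domain Name Industry Brief, in millions of names.
total_q4_2013 = 271.0
added_q4 = 5.0       # net additions during Q4 2013
added_yoy = 18.5     # net additions over the full year

# Quarterly growth is measured against the Q3 2013 base...
base_q3_2013 = total_q4_2013 - added_q4      # 266.0
quarterly_growth = added_q4 / base_q3_2013 * 100

# ...and year-over-year growth against the Q4 2012 base.
base_q4_2012 = total_q4_2013 - added_yoy     # 252.5
yoy_growth = added_yoy / base_q4_2012 * 100

print(round(quarterly_growth, 1), round(yoy_growth, 1))  # 1.9 7.3
```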
Is it likely that we will run out of domain names? No, the number of possible second-level domain names in any TLD is an extremely large number. Refer to page 4 of the report for further explanation.
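To get a sense of scale: DNS labels may be up to 63 characters drawn from letters, digits and the hyphen, with hyphens disallowed at the start and end of a label. A rough count of possible second-level labels, ignoring a few additional reserved patterns, can be sketched as:

```python
# Characters allowed in a DNS label: 26 letters + 10 digits + hyphen = 37,
# but hyphens may not begin or end a label. So a length-1 label has 36
# choices, and a length-n label has 36 * 37^(n-2) * 36 choices.
def possible_labels(max_len=63):
    total = 36  # single-character labels
    for n in range(2, max_len + 1):
        total += 36 * 37 ** (n - 2) * 36
    return total

# The total has 99 digits -- on the order of 10^98 possible names per TLD.
print(len(str(possible_labels())))  # prints 99
```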
The .com and .net TLDs experienced aggregate growth in the fourth quarter of 2013, reaching a combined total of approximately 127.2 million domain names in the adjusted zone for .com and .net. This represents a 5 percent increase year over year. As of Dec. 31, 2013, the base of registered names in .com equaled 112 million names, while .net equaled 15.2 million names.
New .com and .net registrations totaled 8.2 million during the fourth quarter of 2013. In the fourth quarter of 2012, new .com and .net registrations totaled 8.0 million.
During the fourth quarter of 2013, Verisign's average daily Domain Name System (DNS) query load was 82 billion across all TLDs operated by Verisign, with a peak of 100 billion. Compared to the previous quarter, the daily average increased 0.9 percent and the peak decreased 5.5 percent. Year over year, the daily average increased 6.4 percent and the peak decreased 19.2 percent.
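The percentage changes imply what the earlier figures were. A quick back-calculation, in billions of queries per day (the implied values are rounded estimates, not figures reported in the brief):

```python
# Q4 2013 figures from the brief, in billions of queries per day.
avg_q4, peak_q4 = 82.0, 100.0

# Quarter over quarter: average up 0.9 percent, peak down 5.5 percent.
implied_avg_q3 = avg_q4 / 1.009
implied_peak_q3 = peak_q4 / (1 - 0.055)

# Year over year: average up 6.4 percent, peak down 19.2 percent.
implied_avg_q4_2012 = avg_q4 / 1.064
implied_peak_q4_2012 = peak_q4 / (1 - 0.192)

print(round(implied_avg_q3, 1), round(implied_peak_q3, 1))            # 81.3 105.8
print(round(implied_avg_q4_2012, 1), round(implied_peak_q4_2012, 1))  # 77.1 123.8
```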
As the Internet continues to evolve, it is crucial for enterprises to have a powerful and resilient infrastructure that maintains 24/7 availability. The report “DNS Outages: The Challenges of Operating Critical Infrastructure” provides a high-level overview of the implications of DNS outages and the importance of staying ahead of threats.
Verisign publishes the Domain Name Industry Brief to provide Internet users throughout the world with statistical and analytical research and data on the domain name industry. For more information, download the latest Domain Name Industry Brief.
Sean Leach | Apr 09, 2014
Infamous heavyweight boxer Mike Tyson once said “everyone has a plan until they get punched in the face.” As any organization that has faced a cyber attack will tell you, it is a lot like getting punched in the face, and if you’re not ready, you might get knocked out.
You’ve likely read recent headlines of major retailers, financial institutions, and now even universities, being hit with data breaches. As some of them have learned the hard way, it’s not a question of if your organization will be attacked; it’s a question of when. That’s why cyber threat intelligence is essential to any organization, large or small.
Launching a cyber attack has never been easier, and these attacks are increasing in frequency, size and sophistication, making them more difficult to mitigate. They have become so pervasive and complex that the White House recently announced new cybersecurity policies to improve efforts to protect critical U.S. infrastructure against the growing cyber threat. President Obama even commented on the cyber threat, saying, “[it] is one of the most serious economic and national security challenges we face as a nation.” It’s clear that network security hardware and software alone cannot fully address the issue. To defend properly against these threats, you need cyber threat intelligence that provides actionable and relevant decision support to IT and business operations by enabling them to:
Sean Leach | Mar 27, 2014
We often hear from companies with cloud applications that ensuring the availability of critical web-based services and applications is a key requirement for enhancing user experience and engagement. After all, customers often abandon a company's website if they have to wait for it to load, resulting in lost revenue and brand value, all because of something that could easily be avoided.
Many companies today use hardware-based solutions to manage their complex, global website traffic. But the costly combination of initial capital outlay, operational maintenance, future upgrades and skilled staffing requirements makes this an expensive and complex approach to effective traffic management. Worse, it also increases the risk of downtime due to Distributed Denial of Service (DDoS) attacks. Even companies that have already moved to cloud-based global load balancing often complain of the limitations and restrictions they run into when trying to define complex rules for performance and availability settings.
To help organizations meet their complex global traffic management requirements while lowering total cost of ownership, Verisign today launched the next generation of its cloud-based global load balancing platform, Verisign Dynamic Traffic Management (DTM). DTM extends our industry-leading cloud-based Managed DNS Service by providing true software-defined availability and performance optimization.
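This post does not spell out DTM's actual rule syntax, so purely as a generic illustration of what DNS-based global load balancing does, a policy that weights traffic across healthy endpoints might look like the following sketch (the host names and weights are hypothetical, not Verisign configuration):

```python
import random

# Hypothetical endpoint table: host, traffic weight, last health-check result.
ENDPOINTS = [
    {"host": "us-east.example.net", "weight": 70, "healthy": True},
    {"host": "eu-west.example.net", "weight": 30, "healthy": True},
    {"host": "backup.example.net",  "weight": 10, "healthy": True},
]

def pick_endpoint(endpoints, rng=random):
    """Weighted random choice among healthy endpoints, a common
    DNS load-balancing policy; unhealthy endpoints are skipped so
    traffic fails over automatically."""
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy endpoints")
    return rng.choices(healthy, weights=[e["weight"] for e in healthy])[0]

# Simulate a health-check failure in one region: traffic shifts elsewhere.
ENDPOINTS[0]["healthy"] = False
print(pick_endpoint(ENDPOINTS)["host"])
```

A managed service layers much more on top of this (geographic steering, latency measurement, complex rule combinations), but the core idea is answering DNS queries according to policy rather than statically.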
Burt Kaliski | Mar 26, 2014
Presentations, papers and video recordings from the name collisions workshop held earlier this month in London are now available at the workshop web site, namecollisions.net.
The goal of the workshop, described in my “colloquium on collisions” post, was that researchers and practitioners would “speak together” to keep name spaces from “striking together.” The program committee put together an excellent set of talks toward this purpose, providing a strong, objective technical foundation for dialogue. I’m grateful to the committee, speakers, attendees and organizers for their contributions to a successful two-day event, which I hope will benefit the security and stability of Internet naming for many days to come.
Keynote speaker and noted security industry commentator Bruce Schneier (Co3 Systems) set the tone for the two days with a discussion of how humans name things and the shortcomings of computers in doing the same. Names require context, he observed, and “computers are really bad at this” because “everything defaults to global.” Referring to the potential for new gTLDs to conflict with internal names in installed systems, he commented, “It would be great if we could go back 20 years and say ‘Don’t do that’,” but concluded that policymakers have to work with DNS the way it is today.
Bruce said he remains optimistic about long-term prospects as name collisions and other naming challenges are resolved: “I truly expect computers to adapt to us as humans,” to provide the same kind of trustworthy interactions that humans have developed in their communications with one another.