Author: Gabriel Dusil

• Gabriel is a seasoned sales and marketing expert with over twenty years of experience in senior-level positions at companies such as Motorola, VeriSign (part of Symantec), and SecureWorks (part of Dell). His strengths lie in international business development and strategic partnerships, as well as a unique ability to translate complex ideas and technologies into language that decision makers can easily understand. Gabriel has a Bachelor’s degree in Engineering Physics from McMaster University in Canada and possesses expert knowledge in cloud computing, IT security, and video streaming technologies (Over the Top content, OTT). Gabriel also runs his own company, Euro Tech Startups, and manages two blogs: https://mykoddi.com/dusilcom/ and https://gabrieldusil.com/

Cognitive Security • Finance & Banking Security

Portfolio - Cognitive Security, Finance & Banking Security (title)

Synopsis

Bank managers face complex challenges in balancing security spending against evolving internet commerce risks. Criminals have managed to change the battlefield in the war on cyber-crime under the noses of the enterprise community. Highly intelligent exploit kits and trojans bypass layers of security with ease. To prepare for these new adversaries, new and advanced levels of protection are needed to facilitate current and future security objectives. Expert Security addresses the need to implement a more robust and cost-effective level of expertise, and bridges the gap to cloud-based Managed Security Services. It’s no longer about adding never-ending layers of protection that fit within a security budget – it’s about ensuring that the layers that exist are clever enough to mitigate modern attacks. This is paramount in ensuring asset protection. Network Behavior Analysis is a new building block in Expert Security, and offers a viable solution to state-of-the-art cyber-attacks. This presentation outlines some of these threats and how companies are protecting their clients from modern and sophisticated attacks.

Download the Original Presentation here:

Portfolio – Cognitive Security, Finance & Banking Security (’12).pptx

Or view the PDF version on Slideshare:

[slideshare id=17315500&w=476&h=400&sc=no]

Cognitive Security – Introducing Cognitive Analyst

Portfolio - Cognitive Security, Cognitive Analyst (title)

Introduction

Cognitive Security offers a suite of solutions focused on protecting clients against network attacks. This is achieved through a range of products and services called Cognitive Analyst, using Network Behavior Analysis (NBA) and Anomaly Detection (AD). Our goal is to close the vulnerability gap of today’s security limitations, which concentrate only on identifying known threats. NBA can detect modern threats such as zero-day attacks, advanced persistent threats, and polymorphic malware – all of which are not traditionally detected by signature-based solutions. Modern attack vectors require a higher level of security expertise to provide the information necessary to mitigate threats before damage is caused. Cognitive Analyst is not a replacement for existing security devices. Instead, it complements existing protection strategies by enhancing the intelligence necessary for detecting future attackers. Our platform serves to discover unique threats that breach a company’s perimeter. Artificial intelligence and game theory methods are used to “catch criminals in the act”.

Figure i. Introducing the Cognitive Analyst Platform

Cognitive1 is ideal for enterprise clients looking for a robust and cost-effective network behavior analysis platform. This includes any finance sector client who needs to protect their environment from fraud. Cognitive1 is also suitable for vertical markets that need to protect sensitive data and intellectual property (IP) from theft. The design goal of Cognitive1 is to ensure that clients have an easy-to-deploy platform that self-configures and auto-tunes to any network environment. The features of Cognitive1 include:

  • Software Support & Maintenance – The Cognitive Security support team is proactive in helping clients receive the maximum benefit from Cognitive Analyst.
  • Regular Software Updates – Cognitive Security provides clients with software development expertise, with updates issued quarterly. Cognitive Analyst updates maintain a much higher level of protection from threats – and over a longer timeframe – compared to signature-based solutions like antivirus or firewalls. Our mission is to stay ahead of threats, whereas signature-based solutions continually play catch-up with attackers.
  • Cognitive1 is designed for networks up to 2.5 Gbps

Cognitive10 targets clients with high-speed networks and large traffic volumes. This includes telecom providers, mobile operators, network service providers (NSPs or ISPs), and data hosting providers. Cognitive10 is specifically designed to handle high-throughput networks, while maintaining accuracy in detecting modern sophisticated attacks. The features of Cognitive10 include those of Cognitive1, expanded with these additional capabilities:

  • Cognitive10 is designed for networks up to 10 Gbps – This ensures compatibility with clients that have a higher demand in data throughput.
  • Adaptive sampling for high-speed networks – This feature ensures that data classification remains accurate as higher volumes of data are processed and analyzed.
  • Adaptive sensitivity – Cognitive10 has the ability to adapt to the severity of data classification over a longer period of time, based on the type and volume of data flow. This provides administrators with higher accuracy in threat classification across their client base.

CognitiveExpert focuses on clients who have requirements for high resolution and data analysis sensitivity. CognitiveExpert is offered to a select group of clients who require bespoke security, such as government organizations, non-governmental organizations (NGOs), and critical infrastructure providers. These clients may have smaller data throughput requirements, but need a highly accurate platform for detecting attacks. CognitiveExpert requires a mandatory hardware probe and a user-based Deep Packet Inspection (DPI) module for data provisioning. Capabilities of CognitiveExpert include:

  • User data analysis – Using deep packet inspection (DPI), user data greatly enhances accuracy in threat detection and in understanding the attacker’s methods.
  • MAC address analysis – By analyzing details at OSI layer 2, further levels of granularity can be analyzed in specific attack vectors that utilize these methods to penetrate a company’s network.
  • Dedicated Account Management – As an option for CognitiveExpert clients, Cognitive Security offers expertise and analyst resources to help clients understand the severity of events at a granular level.

Experts in Network Behavior Analysis

Cognitive Analyst is a Network Behavior Analysis platform designed to identify a wide range of threats inside a client’s network. It differentiates itself through a low rate of false positives, a low-overhead installation, and a unique self-adapting feature that monitors and improves the system’s accuracy over time. Our competitive advantage is based on the use of advanced artificial intelligence and specialized anomaly detection technologies that ensure highly accurate detection. The intelligence collected by Cognitive Analyst results in an easy-to-integrate, reliable system with long-term stability.

The system processes standard NetFlow v5/v9 and IPFIX data, available from a wide range of network devices. NetFlow does not contain the contents (i.e. the payload) of a communication, therefore data privacy is maintained. This simplicity and efficiency enables the analysis of high-speed links. NetFlow and IPFIX are provided by widely available Cisco, HP, and Juniper switches and routers, and by dedicated hardware or software probes.
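To make the flow-level view concrete, the sketch below shows the kind of per-flow metadata a NetFlow or IPFIX record carries. It is a minimal illustration in Python; the field names are simplified versions of common NetFlow v5 fields, not a full parser.

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    """A simplified flow record: metadata about a conversation, never its payload."""
    src_addr: str
    dst_addr: str
    src_port: int
    dst_port: int
    protocol: int    # e.g. 6 = TCP, 17 = UDP
    packets: int
    bytes: int
    duration_ms: int

# One HTTPS conversation summarized as a single record.
flow = FlowRecord("10.0.0.5", "93.184.216.34", 52341, 443, 6, 42, 61440, 850)
print(flow)
```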

Cognitive Technology

Figure ii. Cognitive Analyst Architecture

Cognitive Analyst uses the data it receives to establish a baseline of network behavior, then analyzes any deviations that occur against this baseline. Deviations may be potential security breaches. These anomalies are then processed in a self-organized, multi-stage process designed to reduce the number of false alarms while retaining a high level of threat sensitivity. Incidents discovered are categorized into a number of broad classes and are checked against established threat models and security policies. Any findings are reported to the user through a flexible array of channels (such as the web interface, email, syslog, or a Security Information and Event Management (SIEM) platform); Cognitive Analyst supports standard IDMEF reporting and a variety of other formats.
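As a minimal illustration of the baseline-and-deviation idea (not the product’s actual models, which combine many features and detectors), the sketch below flags a per-host traffic volume that strays too far from its recent history:

```python
from statistics import mean, pstdev

def is_anomalous(history, current, k=3.0):
    """Flag 'current' if it deviates more than k standard deviations
    from the baseline established by past observations."""
    mu, sigma = mean(history), pstdev(history)
    return sigma > 0 and abs(current - mu) > k * sigma

baseline = [1200, 1350, 1100, 1280, 1220, 1310]   # flows per 5-minute window
print(is_anomalous(baseline, 1290))   # False: within normal variation
print(is_anomalous(baseline, 9800))   # True: a sudden surge worth investigating
```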

The initial stage of processing works with NetFlow or IPFIX data[1]. This information goes through a “Core Processing” module, consisting of algorithms and agents. Each algorithm independently creates a trustfulness score from the data and outputs its results to ‘Knowledge Fusion’, which determines which score is the most accurate.
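A minimal sketch of the fusion step is shown below: several per-algorithm trustfulness scores for one flow are combined into a single value. The weighted average and the example weights are purely illustrative; the actual Knowledge Fusion logic is not described in this document.

```python
def fuse_scores(scores, weights=None):
    """scores: algorithm name -> trustfulness in [0, 1] for a single flow (1.0 = fully trusted)."""
    if weights is None:
        weights = {name: 1.0 for name in scores}       # equal weighting by default
    total = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total

flow_scores = {"minds": 0.20, "xu": 0.35, "volume_pca": 0.55, "taps": 0.10}
print(fuse_scores(flow_scores))   # consolidated trustfulness for this flow
```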

The “Self-Monitoring” module then introduces various synthetic attacks back into the system to enhance the resiliency of the algorithms. The algorithms are not aware that this is simulated data. This is analogous to airport security screening, where false representations of guns or knives are periodically displayed on the officers’ monitors to ensure they are paying attention.
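The sketch below illustrates the self-monitoring idea under simple assumptions: synthetic attack flows with known labels are mixed into a batch, and each detector is scored on how many of them it flags. The detector interface, threshold, and data shapes are hypothetical, chosen only to make the mechanism concrete.

```python
import random

def self_monitor(detectors, real_flows, synthetic_attacks):
    """Score each detector by the fraction of planted synthetic attacks it flags."""
    batch = real_flows + synthetic_attacks
    random.shuffle(batch)                                     # detectors cannot tell real from planted
    performance = {}
    for name, detect in detectors.items():
        flagged = {id(f) for f in batch if detect(f) < 0.5}   # low trustfulness = flagged
        caught = sum(1 for f in synthetic_attacks if id(f) in flagged)
        performance[name] = caught / len(synthetic_attacks)
    return performance

detectors = {"volume": lambda f: 0.1 if f["bytes"] > 10_000 else 0.9}
real = [{"bytes": 500}, {"bytes": 800}]
planted = [{"bytes": 50_000}]
print(self_monitor(detectors, real, planted))   # {'volume': 1.0}
```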

The game-theoretical model provides an aspect of randomness to the system. It ensures that the attacker can never predict the behavior of Cognitive Analyst. The key to high performance is in the combined strengths of the individual detection algorithms, while at the same time eliminating any inherent weaknesses.
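As a loose illustration of how randomness can keep an attacker from tuning traffic against a fixed configuration, the sketch below draws the active detector weighting at random from a set of candidate profiles each evaluation period. This is only one possible interpretation of a mixed strategy; the actual game-theoretical model is not detailed here, and the profile names and weights are invented for the example.

```python
import random

# Candidate weighting profiles over the detection algorithms (names are illustrative).
candidate_profiles = [
    {"minds": 0.40, "xu": 0.20, "volume_pca": 0.20, "taps": 0.20},
    {"minds": 0.10, "xu": 0.50, "volume_pca": 0.20, "taps": 0.20},
    {"minds": 0.25, "xu": 0.25, "volume_pca": 0.25, "taps": 0.25},
]

def pick_profile():
    """Randomly select the weighting used for the next evaluation window."""
    return random.choice(candidate_profiles)

print(pick_profile())
```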

In addition, ‘Policies and Models’ are introduced into the platform to allow conformance to corporate security mandates, and to focus the system on specific parameters such as particular attacks, key assets, or custom challenges.

Finally, the ‘Reporting and Dashboard’ module collects results into a database and parses them into an easy-to-understand user dashboard. Users can navigate to various screens and select an appropriate level of detail to analyze and diagnose threats. Data can also be written into various formats such as syslog, email, and SMS, and can be sent to SIEM or Managed Security Service (MSS) correlation engines for further examination.
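As a minimal sketch of the export step, the snippet below pushes one alert to a local syslog endpoint using Python’s standard library, where a SIEM could collect it. The alert fields, logger name, and address are illustrative assumptions, not the product’s actual schema.

```python
import logging
import logging.handlers

logger = logging.getLogger("nba-alerts")
logger.setLevel(logging.WARNING)
# Send alerts to a local syslog daemon over UDP; a SIEM can collect them from there.
logger.addHandler(logging.handlers.SysLogHandler(address=("localhost", 514)))

alert = {"src": "10.0.0.5", "dst": "192.168.1.20", "category": "horizontal scan", "trust": 0.12}
logger.warning("NBA alert src=%s dst=%s category=%s trust=%.2f",
               alert["src"], alert["dst"], alert["category"], alert["trust"])
```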

Cognitive Architecture

Cognitive Security’s product range offers an unprecedented level of visibility into an intruder’s activities. It is analogous to ‘turning on the light, and surprising the cat burglar’.  With the use of artificial intelligence, and game theory, this platform provides administrators and security practitioners the ability to quickly assess and mitigate attacks that have traversed their perimeter. These core competencies are offered through a range of products and services called Cognitive Analyst.

Cognitive Analyst provides a highly interactive web interface that allows an administrator to continuously monitor the status of their network. By using artificial intelligence, Cognitive Security accelerates the identification of zero-day exploits, botnets, or modern malware attacks that may be used to steal corporate assets or intellectual property, or to commit fraud. The user interface supports an in-depth investigation of individual security incidents or network anomalies, allowing appropriate actions against attackers. Cognitive Analyst is based on a state-of-the-art anomaly detection methodology and utilizes the Cooperative Adaptive Mechanism for Network Protection (CAMNEP) algorithm. CAMNEP is based on the latest advances in the field of trust modeling and reputation handling. The platform utilizes standard NetFlow/IPFIX data and does not require supplementary information (such as application data or user content). Data privacy and data protection are maintained throughout the security monitoring process.

Cognitive Analyst’s products and services utilize a multi-stage detection algorithm to generate a Cognitive Trust Score (CTS), which measures the overall ‘trustfulness’ of the data. Eight algorithms are used to increase the accuracy of threat detection, and these collectively generate the CTS used for the subsequent mitigation of an attack. A selection of these algorithms is summarized as follows:

  • MINDS algorithm [Ertoz et al, 2004] The Minnesota Intrusion Detection System (MINDS) processes data from a number of flow patterns: 1. Data from a single source IP to multiple destinations, 2. Flows from multiple sources to a single destination, or 3. A series of flows between a single source and a single destination.
  • Xu et al. algorithm [Xu, Zhang et al, 2005] This algorithm serves to classify traffic sources. A normalized entropy is assessed (i.e. establishing meaning in the apparent randomness of the data), and static classification rules are then applied to the established normalized states (see the entropy sketch following this list).
  • Volume prediction algorithm [Lakhina et al, 2004] This uses a methodology called Principal Components Analysis (PCA), a mathematical procedure used to formulate predictive models. In order to build a model of traffic volumes from individual sources, values are determined based on the number of flows, bytes, and packets generated from each source. The PCA method then identifies the complex relationships between traffic originating from distinct sources.
  • Entropy prediction algorithm [Lakhina et al, 2005]  This algorithm is similar to the PCA model, but uses different features than just predicting volume. Entropy prediction aggregates traffic from source IPs, but instead of processing traffic volume, it predicts the entropy of source and destination ports, and destination IPs.
  • TAPS algorithm [Sridharan et al, 2006] This targets a specific class of attacks by classifying a subset of suspicious sources and characterizing them by three features: 1. The number of destination IP addresses, 2. The number of ports in the set of flows from the source, and 3. The entropy of the flow size. The anomaly of the source is based on the ratio between these values.
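Several of the algorithms above rely on the entropy of flow features (for example, source and destination ports). The sketch below is a minimal, self-contained illustration of normalized entropy and why a port scan stands out against ordinary traffic; it is not taken from the cited papers’ implementations.

```python
import math
from collections import Counter

def normalized_entropy(values):
    """Shannon entropy of a sample, scaled to [0, 1] by the log of the alphabet size."""
    counts = Counter(values)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    h = -sum(p * math.log2(p) for p in probs)
    max_h = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h

# A source probing many destination ports shows near-maximal port entropy,
# while normal traffic concentrated on a few services stays much lower.
scan_ports = list(range(1, 1025))            # one flow to each of 1024 ports
normal_ports = [443] * 90 + [80] * 10        # mostly HTTPS, some HTTP
print(normalized_entropy(scan_ports))        # ~1.0
print(normalized_entropy(normal_ports))      # ~0.47
```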

Cognitive Analyst implements seven agents, summarized into the following groups:

  • Detection agents encapsulate the detection algorithms listed above by processing all flows from the local probe and using all of the anomaly and trust models to assign a trustfulness score to each flow. This score establishes flow legitimacy as seen by a given agent.
  • These scores are then processed by aggregation agents, which integrate the opinions of all local detection agents, thus building a consolidated trustfulness value. Each aggregation agent embodies one or more averaging functions, such as the arithmetic average or an ordered weighted average (a sketch of the latter follows this list). The reporting and interface agents export the CTS in external industry-standard alert formats (IETF IDMEF/TEXT) such as email, ticket reporting, file logs, or syslog.
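Below is a minimal sketch of an ordered weighted average, one of the averaging functions mentioned above. Opinions are sorted before weighting, so the aggregation can deliberately lean toward the most suspicious (or most trusting) detectors; the example weights are illustrative.

```python
def owa(scores, rank_weights):
    """Ordered weighted average: sort the opinions, then weight them by rank."""
    ordered = sorted(scores, reverse=True)            # most trusted opinion first
    assert len(ordered) == len(rank_weights)
    return sum(s * w for s, w in zip(ordered, rank_weights))

detector_opinions = [0.8, 0.6, 0.3, 0.2]              # trustfulness from four detection agents
print(owa(detector_opinions, [0.1, 0.2, 0.3, 0.4]))   # 0.37 - leans toward the low-trust opinions
print(sum(detector_opinions) / 4)                     # 0.475 - plain arithmetic average, for comparison
```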

Cognitive Features

Figure iii. The First Hour of Operation

Self-Adaptive & Self-Tuning

From the moment that Cognitive Analyst connects to the network, it begins to capture traffic. Within the first five minutes the self-initialization step begins: the system captures its first traffic samples and begins analyzing data using two agents. After ten minutes a third agent is live, and after thirty minutes all agents are active and producing their own trustfulness scores. At the thirty-minute mark the system begins self-configuration, and ‘agent replacement’ begins. This is where the Knowledge Fusion function decides which agent data will be utilized in determining the final Cognitive Trust Score (CTS). After one hour Cognitive Analyst is fully operational and begins self-optimization, improving accuracy as time progresses. Threat detection grows more accurate as the system optimizes throughout its lifetime.

Management Dashboard

Figure iv. Cognitive Analyst – Main Dashboard

The main dashboard depicts an overview of flows categorized by overall trustfulness (the Cognitive Trust Score, or CTS). Green indicates the lowest risk (i.e. highest trust), red means high risk, and various grades of risk are displayed in between. The table also shows an overview of trustfulness over a selected timeframe (this could span minutes, days, or even weeks, as specified by the administrator). An events overview tab summarizes all of the categorized traffic and provides details such as source and destination IP addresses, the type of events associated with the traffic, and the total bytes, flows, and events linked to those events.

The main dashboard also allows the user to quickly select the top IP address of concern, or the top ten IP targets in the given timeframe.  An events list provides further details into the top events that can be analyzed by the user.
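As a small illustration of how a trust score might map onto the dashboard’s risk colors, the sketch below buckets a CTS value into grades. The thresholds and color names are assumptions for the example, not the product’s actual boundaries.

```python
def risk_grade(cts):
    """cts: trustfulness in [0, 1], where 1.0 is fully trusted traffic."""
    if cts >= 0.8:
        return "green"     # lowest risk, highest trust
    if cts >= 0.6:
        return "yellow"
    if cts >= 0.4:
        return "orange"
    return "red"           # highest risk, lowest trust

for score in (0.95, 0.65, 0.15):
    print(score, risk_grade(score))
```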

Figure v – Cognitive Analyst – Filters

Applying Filters

An overview graph can be configured using filters selected by the user. This graphical representation of filtered events allows users to quickly retrieve detailed information, and to drill down to finer details associated with an attacker’s activities and their behavior.

URL Customization

The state of the dashboard screen has now been integrated into the URL itself. This allows users to modify the URL in their browser to customize the dashboard, save it in their browser favorites for future access, or share it with another security administrator. The URL can be edited to include pre-set filters, the displayed tab, and the time period.
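The snippet below sketches the general idea of encoding dashboard state into a shareable URL. The base address and parameter names are hypothetical placeholders; the product’s actual URL scheme is not documented here.

```python
from urllib.parse import urlencode

state = {"tab": "events", "filter": "src_ip=10.0.0.5", "from": "2012-06-01", "to": "2012-06-07"}
url = "https://analyst.example.local/dashboard?" + urlencode(state)
print(url)   # bookmark this, or send it to another administrator
```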

Cognitive Deployment

Cognitive Security offers the Analyst platform in the following configurations:

  • Delivered as a pre-installed virtual appliance
  • Corporate appliance with traffic capture and analysis including a NetFlow probe
  • OEM software module – Our development capabilities offer custom solutions for OEM or third-party product vendors.

Cognitive Analyst can be deployed as an appliance, or in combination with a monitoring service. As a service offering, Cognitive Security’s expert analysts provide security monitoring to supplement a client’s security team. Several service levels are offered to address the mosaic of needs across vertical and horizontal markets. Our analysts provide needed flexibility to an existing security department, and help them regulate the fine balance between budgets and the need for new security layers.

Deployment options for Cognitive Analyst enjoy a high level of flexibility, in order to enhance a client’s threat detection capabilities:

  • Corporate Extranet – To detect firewall breaches, polymorphic malware, custom attacks, botnet command & control, or unauthorized access.
  • Corporate Intranet – To monitor malicious behavior such as malware that has circumvented the perimeter or insider attacks, and to protect against disgruntled employees who are misusing assets or violating security or corporate policies.
  • Critical Assets – Advanced persistent threats, sophisticated multi-stage attacks, & unauthorized access.

Government, CERT, and educational institutions can benefit from specific terms and pricing. Contact us for details.

Figure vi. Cognitive Analyst – Installation Options

Cognitive Differentiation

Cognitive Analyst detects modern attacks against corporations through intelligent, self-learning analysis of network traffic:

Figure vii. Cognitive Dashboard – Top Targets

  • Strength of Eight Anomaly Detection Algorithms – These achieve a high sensitivity rate in detecting attacks at a granular level and a low false-alarm rate, by using artificial intelligence in the core processing engine.
  • Peer-Reviewed Detection Algorithms – Cognitive Analyst is based on tried-and-tested algorithms that have been continually recognized and researched by the scientific community.
  • Self-Monitoring & Self-Adaptation – Cognitive Analyst automatically configures itself without human intervention, and is able to begin detecting attacks in less than an hour once it has been turned on.
  • Low Integration and Management Cost – Cognitive Analyst complements existing security infrastructure and provides the necessary intelligence to address the growing complexity of future threats.
  • Seamless Integration – To minimize client overhead for integration and deployment, Cognitive Analyst has been architected as a passive, self-monitoring, and self-adaptive system.
  • Resistance to Hacker Circumvention – Cognitive Analyst uses solid Game Theory Principles, to ensure that hackers cannot predict or manipulate the system’s outcome.
  • Long-Duration Trust Modeling – Cognitive Analyst compares current data with past assessments (called trust models) to maintain a high level of sensitivity.

Cognitive Analyst has been actively developed with the support of the U.S. Department of Defense.

Cognitive Highlights

The Cognitive Analyst is not only used to complement an existing intrusion detection deployment, but to also add a critical new layer in a security ecosystem.  Below are some key differentiators of the Cognitive Analyst series:

Figure viii. Cognitive Dashboard – Netflow Categorized by Threat Severity

  • Artificial intelligence – Cognitive Analyst is a self-learning and self-adapting platform, ensuring that the normal behavior of a network can be distinguished from attacks. Our solution overcomes the challenge of detecting these anomalies amongst the chaos of network traffic. A.I. also frees the administrator from manually managing network security on a 24×7 basis, and from relying on human resources to find ‘needles in the haystack’. The moment that Cognitive Analyst is installed it begins a continuous tuning process, resulting in increased accuracy of threat detection as time progresses.
  • Cognitive Analyst does not use signatures! Our product uses a dynamically created set of adaptive anomaly detection models to provide the best performance in a client’s network environment. It is not subject to the limitations and timeliness of signature updates that may negatively affect other security devices. Namely, hackers will exploit a company’s delay in implementing signature updates, and “sneak in under the radar” before a system is patched with new signatures. For this very reason, zero-day attacks have continued to proliferate in modern exploits. From the perspective of criminal logic – why tell a supplier about a discovered vulnerability and give them time to patch it, when the criminal could spend that time creating an exploit and enjoy a window to deploy their attack? Such criminality can result in the theft of millions of dollars, while suppliers panic and try to patch their vulnerabilities in the midst of an attack.
  • Eight independent anomaly detection algorithms are used for optimal coverage of the full threat spectrum. Key algorithms have been peer reviewed and independently validated.
  • Multiple levels of decision-making agents are deployed to ensure that the system can automatically adapt to the dynamics of a deployed network, and its organically changing environment. This reduces operational and integration costs.
  • Game Theory methods – As some of the implemented algorithms are available to public research communities and are peer-reviewed, the game theory approach is necessary to ensure that the system always stays one step ahead of attackers, and never allows them to predict the behavior of Cognitive Analyst.
  • Low-Cost Integration – Cognitive Analyst uses readily available NetFlow data and does not require integration with any other data sources or neighboring security products. Cognitive Analyst allows for easy system integration and management through an easy-to-use dashboard.
  • Maintaining Privacy – Since only NetFlow data is used, a client’s private data traversing the network is protected. Cognitive Analyst does not perform content analysis, thus addressing client concerns regarding data protection and privacy, corporate policy, and regulatory compliance.
  • Data availability – The network flow (NetFlow) data specification is a de facto standard and has been extended and codified as IPFIX in RFC 5101. NetFlow is available from a wide range of network appliances from all major vendors. It is provided by most enterprise-grade Cisco routers, by network switches (Enterasys, HP ProCurve), and by dedicated hardware and software probes from independent vendors. NetFlow is aggregated over a period of several minutes before being transferred to the Cognitive Analyst system for investigation.
  • Mode of operation – Cognitive Analyst processes NetFlow data in an on-line mode, with a small delay due to the network flow aggregation process. Each batch of data is processed immediately, and any suspected malicious activity is then discovered and reported to network administrators via e-mail or alert reporting protocols, logged, and/or displayed in the web interface. Alerts are available in the standard IDMEF format, a text format, or a rich web format for easy analysis and quick mitigation.
  • Self-management – Cognitive Analyst minimizes operational costs by using a self-managing paradigm. This allows the system to perform run-time estimates of its expected sensitivity and false positive rate, and to optimize its configuration to ensure peak performance. This process can optionally be coupled with network security policies and threat models in order to maximize system effectiveness against the latest attack methodologies.
  • Development Expertise & Flexibility – Cognitive Security’s team of developers is now in its fourth generation of Cognitive Analyst, the culmination of four years of product advancement and stability. We have built a self-repair mechanism that transparently restores individual components in case of failure, protecting the rest of the system from degradation. Our relative size allows us to be flexible to client needs and to quickly turn around tailored software features.

Synopsis

Virtually every corporation, non-profit institution, and government organization in the world is dependent on network access and information services. Securing these networks and systems against automated and hacker-driven attacks is becoming critically important with the growing incidence of information and asset misuse. Current security solutions are surprisingly fragile when facing today’s sophisticated adversaries. Cognitive Security differentiates itself by providing clients with an advanced platform built on artificial intelligence, and sophisticated modules for auto-configuration and self-tuning.

Cognitive Analyst has been actively developed with the support of the U.S. Department of Defense.

About Cognitive Security

Cognitive Security specializes in Network Behavior Analysis, allowing businesses to identify and protect themselves against sophisticated network attacks. Cognitive Security offers solutions designed to fill the security gaps left by the current generation of network security tools. Our expertise in Network Behavior Analysis allows us to accurately expose both known and unknown attacks. Our solution is ideally suited for the detection, prioritization, and handling of modern-day attack patterns that would typically bypass or evade a client’s defenses.

Contact us at www.Cognitive-Security.com, or info@cognitive-security.com for more details.

About the co-Author

Gabriel Dusil oversees the global sales & marketing strategies of Cognitive Security, with a mandate to expand the company’s presence across Europe, the USA, and beyond.

Before joining Cognitive Security, Gabriel was the Director of Alliances at SecureWorks, responsible for partnerships across Europe, Middle East, and Africa (EMEA).  Previous to SecureWorks, Gabriel worked at VeriSign and Motorola in a combination of senior marketing and sales roles.

Over nearly two decades, Gabriel’s experience has encompassed the development and management of international partner programs, EMEA marketing & sales, and business development. Gabriel has also lectured on security, authentication, and data communications, as well as speaking at several prominent IT symposia.

Gabriel obtained a degree in Engineering Physics from McMaster University in Canada and has advanced knowledge in Cloud Computing, SaaS (Security as a Service), Managed Security Services (MSS), Identity and Access Management (IAM), and Security Best Practices.

Tags

Network Behavior Analysis, NBA, Cyber Attacks, Forensics Analysis, Normal vs. Abnormal Behavior, Anomaly Detection, NetFlow, Incident Response, Security as a Service, SaaS, Managed Security Services, MSS, Monitoring & Management, Advanced Persistent Threats, APT, Zero-Day attacks, Zero Day attacks, polymorphic malware, Modern Sophisticated Attacks, MSA, Non-Signature Detection, Artificial Intelligence, A.I., AI, Security Innovation, Mobile security, Cognitive Security, Cognitive Analyst, Forensics analysis, Gabriel Dusil


[1] Internet Protocol Flow Information Export (IPFIX), defined in RFC 5101 and RFC 5102, is derived from the NetFlow version 9 specification and was created out of the need for a universal standard for exporting IP flow information from routers and other network-connected devices.

Cognitive Security – Anatomy of Advanced Persistent Threats

Graphic - Cognitive Security, Anatomy of Advanced Persistent Threats (title)

Synopsis

“Advanced Persistent Threats”, or APTs, involve low-level reconnaissance and exploitation of security perimeters in order to collectively launch a targeted and prolonged attack. The goal is to gain maximum control within the target organization. APTs pose serious concerns to a security management team, especially as APT tool-kits become commercially and globally available. Today’s threats involve polymorphic malware and other techniques that are designed to evade traditional security measures. Best-in-class security solutions now require controls that do not rely on signature-based detection, since APTs are “signature-aware” and designed to bypass traditional security layers. New methods, such as behavioral analysis, are needed to combat these threats. Network Behavior Analysis proactively detects and blocks suspicious behavior before significant damage can be done by the perpetrator. This presentation provides some valuable statistics on the growing threat of APTs.

Download the Original Presentation here:

Portfolio – Cognitive Security, Anatomy of Advanced Persistent Threats (’12).pptx

View the PDF version here:

[slideshare id=22478702&sc=no]

SecureWorks – A Compliance Framework for Credit Card Security

Graphic - SecureWorks, A Compliance Framework for Credit Card Security (title)

Synopsis

As the saying goes, “if you don’t know where you’re going, you’re certainly not going to get where you need to be”. This is certainly applicable to the efforts of many security practitioners aligning their strategies and enterprise infrastructures to comply with PCI DSS (Payment Card Industry Data Security Standard). As outlined in this presentation, the payment industry is faced with an increase in data breaches. This highlights the need to maintain a robust data security standard that protects consumers and their personal data. Through PCI DSS compliance, stakeholders can create an environment that lends itself to a high benchmark in security best practices, and minimizes the tendency to implement reactionary solutions.

VeriSign – Meet Your Colleague

Portfolio - VeriSign, Keynotes (Meet Gabriel Dusil, '05, title), Building Trust From the Inside Out

Name:  Gabriel Dusil
Title:  Director Partnerships, EMEA
Division: VeriSign Security Services
Location:  Europe, Middle East, Africa
Interview:  30th November, 2004

Talk about your history at VeriSign.

I joined VeriSign back in August 2000 as the Marketing Director for EMEA. I was recruited from Motorola, where I was managing a small team of marketers. When I first joined VeriSign, EMEA affiliate expansion was in the middle of all its glory, but I needed to adjust to the change in corporate culture—from Motorola to what was at the time a “medium-sized” U.S. corporation—and to the one-man-band aspect of my new role. VeriSign had bought Network Solutions earlier that year, and was busy with that integration. It was an exciting time. I soon learned that I was the first marketing hire outside of the U.S., and overall, the sixth employee in EMEA. So the position had good exposure within the organization, and was a great opportunity for me to prove my capabilities. I also learned that I was joining a team of people who were the best in the industry. Working among the best of the best helped me raise my own bar.

With around 800 employees now outside of the U.S., it is easy to forget that five years ago we were truly a European start-up. It has been amazing to see our growth over the years.

What’s a typical day at work like?

There is one common theme to my work day at VeriSign – each one is different. For my first three years, it was about delivering marketing programs and supporting our affiliate partners across the region. In the past year and a half I have taken on a sales position, which has led me to pipeline management, customer expectation management, and overseeing the sales life-cycle to closure. I find that an integral part of this process often involves building and maintaining the customer’s confidence in buying from VeriSign, and ensuring a successful installation. In a recent Unified Authentication Services (UAS) deal, where we were offering two-factor authentication services, we had no fewer than 25 critical issues to solve, and any one of those issues could have collapsed the deal. A team effort is the only way to ensure all these issues are solved, and this is exactly how we were successful in closing the deal – and today the customer is extremely happy with our performance. In fact, the prospect was our first retail bank installation in the world – Alpha Bank, Greece, via our partner Adacom.

I spend about 50% of my time on the road. My days start at 9:00 and often finish as late as 10 p.m. My curse is that I can’t go to bed without clearing my email inbox! Time zones play an important role in our daily activities. I speak to the Middle East and Eastern Europe during the morning, which is 2-4 hours ahead of the U.K., then start discussions with the U.S. East coast by mid-afternoon, and talk to Mountain View, California in the evenings (8 hours behind the UK). Many of us in the European organization say that we “live VeriSign”. If you love your job then hard work is balanced by satisfaction.

What do you like the most about working at VeriSign?

As a self-proclaimed tech geek, one enjoyable aspect of my role at VeriSign is learning about IT, the Internet, and the security industry. Whether it’s sales or marketing, I have found that both disciplines give me the freedom to learn more about technology, in as much detail as my aging mind allows. I feel that VeriSign is on the leading edge of securing critical infrastructure, so I can experience a multi-faceted approach to learning new areas of our portfolio–whether it’s anti-phishing, cloud-based authentication services, managed security services (MSS), or any new and exciting service coming down the line from our product managers.
From the sales perspective, I like the discovery phase of learning the customer’s requirements, and how they are positioning their infrastructures for the future needs of their target market. It seems to be more of an art than a science. For example, listening to what is not said is a much harder challenge than concentrating on what was said. The devil is always in the details, and learning these nuances has also been fun. With my marketing hat on, it is fascinating to compare this street-level data to what the analysts are saying about the market. The markets where VeriSign plays allow for an incredible scope of learning, in as much detail as I choose.

What does “One VeriSign” mean to you?

“One VeriSign” for me is about people, and how we develop relationships within the company, departmentally, geographically, or by product portfolio. Of course, we have a “One VeriSign” corporate vision, but it’s the trust relationships which we develop at the street level where the vision is realized.

What are some of your hobbies, volunteer efforts, & outside interests?

Not to shock anyone, but I suppose one of my hobbies is fitness (trigger for Souheil Badran to laugh out loud). I call it a “war with my body” – which some would agree is a war I am losing. I believe that balancing the mind and the body is important to one’s health and happiness, and, for me, daily exercise counter-weights the daily intellectual challenges of work.
Much of my recreation is now spent as a parent, watching my two boys—eight months old and three and a half years old—grow and develop. My children have been a life-changing experience, as any parent can attest. I suppose the extent of my volunteer efforts is in babysitting my kids so that my wife can go out with her friends.

Tell us one thing about yourself that your colleagues wouldn’t have guessed!

I suppose a shocker to most people is that I am a black belt in Shotokan Karate. Although I have not been training for the past 10 years, I still release my pent-up energy on a punching bag twice per week at the local gym — another aspect of balancing my mind and body.

About the Interviewee

Gabriel Dusil is VeriSign’s Marketing Director responsible for the Europe, Middle East and African region. Mr. Dusil’s role includes the management of Channel and Direct Marketing, as well as Marketing Communications. His responsibilities also include the development of product strategies and market positioning throughout the emerging markets.

Prior to VeriSign, Mr. Dusil had been with Motorola for six years as the EMEA Marketing Director for its Internet and Networking Group. He has over 10 years of experience in the communications industry, and over nine years of international marketing experience. Mr. Dusil has a degree in Engineering Physics from McMaster University in Canada.

VeriSign – Milestones for Sustaining eBusiness Growth

Portfolio - VeriSign, Milestones for Sustaining eBusiness Growth (title)

Adapting to Change

As a new year unfolds, we continue to observe growth in eBusiness opportunities. Even as the Internet expansion trend overcomes slumps in the world economy and defies political and religious strife, the momentum generated so far is a force too big to ignore. Regardless, continued growth faces tangible milestones if this trend is to sustain itself throughout the decade.

  • Achieving End-to-end Connectivity – Although the internet may be considered “end-to-end”, many companies continue to struggle with connectivity within their own enterprise, let alone achieving the same goals externally with their suppliers, partners, and customers. Despite the false start of Supply Chain Management (SCM) initiatives and struggles to achieve Business Process Re-automation (BPR) in the late 90’s, such solutions still offer promise, although this promise has somewhat morphed into up-and-coming business solutions such as Web Services.
  • Implementing Change in Corporate Culture – In moving to new business models, companies need to instill fundamental changes in how employees conduct their daily activities. New processes and procedures need to be integrated into corporate culture, in eBusiness applications as well as in the eSecurity protecting this infrastructure. This is undoubtedly a more significant challenge for large enterprises than for smaller, more agile businesses. Whether migrating an enterprise to online commerce or implementing a modern security infrastructure such as managed security services or identity management, the effort for management to implement such new solutions is a mere 20% technical versus 80% organizational.
  • Implementing Enterprise-wide Identity Management – A continued expansion of channel relationships now dictates a more aggressive approach to identifying the individuals with whom we conduct our daily business. Identity management has made its mark on a national and global scale as governments evolve new border control security in efforts to counter terrorism. Likewise, corporate industry continues to struggle with the ever-increasing sophistication of fraud methods and hacker attacks. Individual enterprises need to establish trust in authentication (who you are) and authorization (what I will allow you to do). For these reasons, identity is fundamental to the ever-increasing importance of establishing trust. These are the ultimate drivers: trust in knowing who is transacting with you, and trust in who you are allowing into your confidential data – especially in the modern age of business relations between people who have never met in person, and may never meet face to face in their lifetime. Trust is fundamental to eBusiness growth.

Government Connectivity

If the nineties were about bringing enterprises online, then this decade is surely dedicated to Government achieving the same.

In Germany, for instance, the public sector is driving the “BundOnline 2005” initiative, which is targeted at offering 24×7 eGovernment services to citizens. The German government is set to invest €1.65 billion by 2005 in order to migrate its 400 public services online. This offers potential for online payment, electronic form signing, and data security through digital signatures. 200,000 German employees of ministries and federal agencies will be supplied with smartcards and readers by 2005. A quarter of the 400 targeted services — including, for example, bidding for federal procurement contracts — are expected to utilize electronic signatures.

European countries are working towards similar initiatives in the same time frame, and many – France, Greece, Sweden, Denmark, and the Netherlands – have already begun. The need to offer nation-wide online services in G2B (government to business), G2C (government to citizen), and G2E (government to employee) is driving the use of eSecurity for authentication, encryption, trust, and non-repudiation. Initial adoption has begun in services such as online Value Added Tax (VAT) reporting from businesses to government – a classic win-win approach to online G2B, as it creates cost and time efficiencies for both the enterprise and the government. In addition, on a regional scale, local municipality portals are being created to allow citizens 24×7 access to government services, as well as the use of identity cards for access to various community services. Examples to date have revolved around online form submissions for social services, income tax, land registration, and other legal documentation. It sure beats standing in line for four hours at your local city hall.

Aggressive internet initiatives are forthcoming in the healthcare sector as well. Online medical services are being driven by several factors:

  • Privacy of patient records, required to meet legislative requirements as well as European Union (EU) data protection expectations.
  • Ensuring the integrity of online prescriptions, validated by a digital signature, authenticating the identity of the patient’s doctor.
  • Electronic workflow in the communication of patient records via the internet, as well as the transfer of prescriptions online. This also applies to the pharmaceutical industry, which is interested in using eWorkflow and identity management solutions to streamline time to market for new drugs.
  • End-to-End Connectivity – Online connectivity of hospitals, doctors, pharmacies, and patients continues to be the vision moving forward.
  • New service capabilities enabling faster response times and enhanced communication through mobile devices are considered longer-term milestones.

Such eHealthcare initiatives may be monumental, but they are not insurmountable. Comparing our three eBusiness milestones, end-to-end connectivity may be the least of our worries, thanks to well-established co-ordination between the public and private sectors. Establishing identity and privacy is clearly fundamental to healthcare projects, and various legislative initiatives in both the USA and Europe have stepped up to mandate such requirements. Cultural acceptance will be incrementally achievable through a solid communications strategy and migration plan. Ultimately, the ongoing sophistication of internet usage will be the catalyst in modernizing existing healthcare infrastructure, but time-scales for such migration should be realistic.

Both the public and private sectors are driving towards change and modernization, although it is fair to say that the private sector is ahead of government initiatives by at least five years.

Consumers and the Enterprise

Business drivers are somewhat different for the corporate market. Whereas the private sector focuses on market share, profitability, and channel expansion, public sector mandates revolve around service quality, efficiency, and legislative compliance. Although governments are not motivated by profit, there is a significant amount of accountability and measurement in ensuring the success of G2B, G2E, or G2C initiatives.

The private sector accelerated forward in eBusiness initiatives throughout the nineties, widening the sophistication gap compared to government initiatives at that time. Even today, Business to Business (B2B) eMarketplaces continue to fuel success in selected markets, while Business to Consumer (B2C) is still in its infancy with respect to revenue and profit expectations. This is partially attributed to the lack of technical sophistication of online consumers today (i.e. the challenges of educating the greater public on the overly complex usage requirements of computers and the internet). In many societies these struggles continue, but times are changing, and mass markets have proven resilient in their ability to adapt. This is evidenced by the estimated one billion online users expected by the year 2006, up from 600 million users today. Modern society continues to adjust to the Internet culture, but possibly not in the time scales we all desire.

Planning Next Steps

The gap between eSecurity threats and corresponding countermeasures continues to grow. Attackers continue to find new holes in our networks and applications, and we just can’t seem to plug them fast enough. This is undoubtedly a red flag, and my wishful thinking hopes that such a trend is not sustainable. As eSecurity is at the forefront of the Internet’s concerns, we need to evolve our expectations from taking a defensive role against security threats (a reactive approach) to offensive measures (proactive) in order to prevent attacks before they happen. Security prevention is a distant goal for many corporations which continue the philosophy of investing in security only after an attack. But when the damage is done, it may be irreversible. This leads to loss of revenue (which in some industries is measured in seconds of downtime), loss of time (to recover from the attack), and loss of reputation (it takes years to build a brand, but only days to have it crash down on you).

An enterprise trying to manage all threats itself is simply unrealistic. It’s the classic defender’s dilemma – you need to protect yourself from all known vulnerabilities, whereas the attacker only needs to know the one vulnerability that compromises your fortress.

  • Where do we find the expertise to block all threats?
  • Where do we find the time to ensure 24×7 protection?
  • How does the enterprise source adequate funds to protect itself?

Both the public and private sectors should consider security solutions outside of their fortress to find these answers. For instance, Managed Security Services (MSS) offers a central Security Operations Center (SOC) of experts to assess vulnerabilities, threats, and potential solutions. Outsourcing eSecurity ensures a significantly lower total cost of ownership (TCO) – as much as 40%-60% savings compared to creating a department to achieve the same level of 24×7 protection. But more importantly, CTOs can sleep at night knowing that their network and applications are protected by the best level of defense.

As security threats meet us both inside and outside the enterprise, identity management and access control become essential elements of eSecurity strategies – whether it be access to internet, extranet, and intranet environments, access to critical data via a mobile phone, PDA, laptop, wireless LAN, or smartcard, or enabling access connectivity for customers, employees, suppliers, and partners. Customers will be challenged to choose solutions which provide identity management and access control to meet both present and future needs. Managing privacy versus security, while maintaining a reasonably low cost of ownership, will be an influencer. Connecting disparate IT platforms, directories, applications, and back-end systems will be a deciding factor. The decision to consolidate systems such as directories and applications becomes strategic to the organization.

History has taught us that IT implementations which tear out the old to bring in the new just aren’t cost-justified or realistic. For these reasons, the approach to implementing new eBusiness initiatives involves an incremental migration path, not a replacement strategy. With today’s tighter IT budgets, investment protection for existing assets is essential. For example, access control has evolved from a Single Sign-On (SSO) approach into what is now referred to as “reduced sign-on” (RSO); SSO was simply impractical and unrealistic.

A focus on phased implementations is important, especially as it pertains to end-to-end connectivity. “Start Small, Think Big” is the latest mantra. Consider this approach to an eBusiness deployment:

  • Understand and document your business pains and your strategy for solving them. Treating your IT infrastructure as strategic, and taking an end-to-end view, will lead to greater eBusiness success.
  • Establish the right leadership and cross-departmental teams. Understand where your organization’s cultural dynamics are today, and how they will affect your deployment. Ensure that departmental owners are accountable for driving change.
  • Design your architecture and transition strategy and document them in an RFP. Concentrate on achieving a solid foundation through internal connectivity between heterogeneous systems. Then look externally to connecting suppliers, partners, and customers.
  • Plan the project in multiple phases. Identify incremental milestones. Select a list of target suppliers and shortlist them against your requirements. Choose your suppliers and implement in phases.
  • Treat your project as organic – constantly changing and evolving with the changes in IT and the demands of the market.

Achieving end-to-end connectivity, change in corporate culture, and identity management is driven by leadership. Change is always achievable, but the real question is “when” rather than “if” it will occur. Leadership will be a deciding factor in such transformation, and will help generate cultural harmony around evolving eBusiness approaches.

About the Author

Gabriel Dusil is VeriSign’s Marketing Director responsible for the Europe, Middle East and African region. Mr. Dusil’s role includes the management of Channel and Direct Marketing, as well as Marketing Communications. His responsibilities also include the development of product strategies and market positioning throughout the emerging markets.

Prior to VeriSign, Mr. Dusil had been with Motorola for six years as the EMEA Marketing Director for its Internet and Networking Group. He has over 10 years of experience in the communications industry, and over nine years of international marketing experience. Mr. Dusil has a degree in Engineering Physics from McMaster University in Canada.

VeriSign – The PKI Value Proposition

Portfolio - VeriSign, PKI Value Proposition ('02, Symposium Globe)

The eSecurity Evolution

The Internet’s rapid growth brought forth a multitude of innovative service offerings. In its early life cycle, the Internet experience defined new products, and ultimately new market segments. One of the most important of these markets in recent years has evolved around the consumer demand for “trust”, and the value of trusting the Internet. The industry answered this demand through a market segment now called eSecurity, and vendors worked hard to clearly differentiate themselves in this space, each providing either a service or a product that clearly distinguished their value proposition from competitors. But in the last few years eSecurity has blurred the distinction between various products and services as it evolves and accelerates as the fastest growing market segment in cyberspace. The sub-segments within this market, such as firewalls, virtual private networks (VPNs), anti-virus, and authentication services, have become critical components of a security policy. eSecurity continues to evolve rapidly to the demands of eCommerce, as transaction-based services are expected to infuse new growth trends. This is a reflection of the increased importance that consumers give to confidence, and to the value of trust in the vendors with whom they wish to do business. Recent awareness of managing and protecting privacy is a further reflection of how consumers value their supplier relationships. Governments have also stepped up to the plate throughout Europe in finalizing legislation around the protection of privacy and the legal recognition of electronic signatures. All these dynamics are raising the fundamental awareness of eSecurity and the importance of high security, in which the PKI (Public Key Infrastructure) sub-segment plays a critical role. PKI has grown beyond a traditional eSecurity offering and is now considered a basic enabler of new eBusiness revenue streams. Early in its life cycle, PKI established itself with a clear value when compared to its neighbors:
  • Firewalls established the fortress for a corporation, with intrusion detection serving to enhance this capability;
  • Antivirus protected hosts and desktops from the threat of infection;
  • VPNs ensured secure communications over public networks;
  • PKI steps in to provide application-level security, and removes the inherent weaknesses of IDs and passwords by linking the identity of users to their Internet hosts through digital certificates.

But PKI goes further, crossing the boundaries of security by enabling a host of services which were not previously possible due to the lack of infrastructure:

  • Digital Signing of electronic documents
  • Electronic supply chain management
  • Electronic (e)Ordering & eProcurement
  • Online eGovernment Services
  • Healthcare & National ID Services

These are only a few examples of applications which were not previously acceptable on the Internet, but are now enabled by the enhanced security offered by PKI. How do we bridge the gap from our current IT infrastructure to enhanced security using PKI? This article outlines two fundamental implementations, referred to as in-house PKI and outsourced (cloud-based) PKI solutions, and describes the value proposition and intrinsic differentiation of these two approaches.
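To ground the discussion, the sketch below shows the signing-and-verification primitive that the services above build on, using the third-party Python “cryptography” package. It is a minimal illustration only: certificate issuance, certificate policies, and revocation – the parts that make up a real PKI – are deliberately out of scope, and the key is generated on the fly rather than issued by a Certificate Authority.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Generate a throwaway RSA key pair; in a real PKI the public key would be bound
# to an identity by a certificate issued by a Certificate Authority.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
document = b"Purchase order: 500 units, net 30 days"

# The signer creates a signature over the document with their private key.
signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# The relying party verifies with the signer's public key; verify() raises
# InvalidSignature if the document or signature was altered.
private_key.public_key().verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
print("signature verified")
```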

Setting the Stage

PKI is one of the few technologies today which integrates the disciplines of legal practice and information technology. This results in several unique challenges in deployment, but is also a reflection of the distinctive role that PKI serves on the Internet: namely, our ability to identify the existence of a company, to recognize individuals through the use of digital certificates, and to legally bind digital signatures with the same validity as a handwritten signature. To overcome the legal and technological obstacles, implementing a PKI solution has resulted in two fundamentally different approaches, described as follows:

In-house PKI
  • This involves the implementation of a managed in-house PKI solution. In this approach the customer purchases PKI software and hardware which is used to deploy digital certificates to individuals in the company.  Dedicated staff are responsible for defining their own certificate practices and policies for the creation and distribution of digital certificates throughout the corporate infrastructure.  Companies perceive that this approach offers inherent “ownership” and flexibility. But typically this option requires a large upfront investment in both time and money.
Outsourced PKI
  • This cloud-based approach is analogous to the service provider market, whereby the ownership of infrastructure lies with an external entity known as a Certificate Authority (CA).  The CA is responsible for setting policy, managing information technology (IT), and owning liability on behalf of the customer.  The advantage is that the customer retains control of certificate issuance, co-branding, and management, while moving the responsibility of maintenance, scalability, and policy management to the back-end (commonly referred to as the processing center).

Furthermore, outsourced solutions cover all aspects of the PKI infrastructure such as:

Legal
  • The Certification Practice Statement (CPS) and Certificate Policy (CP), which establish the legal framework of PKI.  In Europe, conformance is to the EU Electronic Signatures Directive.
Technical
  • The CA maintains the ability to migrate PKI to new standards.  Since the PKI processing center is upgraded once in the back-end, all customers take advantage of new features simultaneously.  This also applies to technological upgrades such as the upcoming XKMS standard, developed jointly by VeriSign, Microsoft and WebMethods, which provides an open standard for PKI in XML environments.
Human Resources
  • Project management, policy management, and certificate deployment costs are often lost in the overall cost of ownership model.  All of these costs are substantially reduced when outsourcing, since the expertise of PKI deployment is off-loaded to the CA.

Outsourcing has become increasingly attractive as it removes the burden of a large upfront investment, and takes the emphasis off licensing as the main revenue stream.  This has become even more important during times of economic difficulty, as cost-cutting becomes a primary concern.  “The primary benefit of this [cloud] business model for end-user businesses is avoiding the administrative, project management and IT integration demands that an in-house implementation would require without relinquishing control over the solution.” Datamonitor

Spending

Figure #1: In-house PKI Investment

Decisions around eSecurity spending are often compared against metrics of lower cost, flexibility, control, and deployment speed.  In-house deployments are sold on the perceived merits of greater control, flexibility, and lower costs in the long term.  In-house certificates are expected to be issued and revoked quickly, and security policies tailored to business needs.  Ironically, outsourced solutions are up and running in a much shorter time-frame and result in lower capital and operational investment when the total cost of ownership (TCO) is taken into account.  In fact, allowing companies to outsource their security gives them more flexibility to concentrate on their core business.  IDC estimates that the global IT management services market will expand from 95.3 billion US$ in 2000 to 214.9 billion US$ in 2005, a compounded annual growth rate of roughly 17.5%.  The trade-off is often judged on “up front costs”, since proponents of in-house solutions have the customer compare their proposal cost to that of a cloud-based service provider.
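
As a quick sanity check of the growth figure quoted above, the compound annual growth rate implied by the IDC estimates can be reproduced with a few lines of Python (the dollar values are those cited in this article; the snippet itself is only an illustrative calculation, not part of the original analysis):

```python
# Illustrative check of the IDC figures quoted above: 95.3 billion US$ in 2000
# growing to 214.9 billion US$ in 2005. The formula is the standard CAGR
# definition; the dollar values come from the article text.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over `years` periods."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

growth = cagr(95.3, 214.9, years=5)
print(f"Implied CAGR: {growth:.1%}")   # ~17.7%, in line with the ~17.5% cited
```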

Figure #2: Outsourced PKI Investment

Customers are often caught up in the shadow of proposal costs, ignoring tangible factors such as the Total Cost of Ownership and Investment Protection of a given solution.  For certificate services, total deployment costs can be grouped into four main areas:

Human Resources
  • Project management costs to deploy the overall infrastructure and services
  • Operational & maintenance support includes costs associated with application integration
  • Costs of managing the Registration Authority and Certificate Authority should not be overlooked
  • Human resources need to build PKI expertise and maintain these in-house systems
Infrastructure
  • Hardware and software costs which form the basis of the PKI infrastructure
  • Secure Processing facilities are critical to ensure that the root key (or CA private key) is protected against theft or fraudulent threats.
  • Upgrades due to technology evolution and scalability
Services
  • Training costs should be taken into account, both during the initial deployment and as further education is needed while legislation and the technology evolve.
  • External consultancy services often require significant investment for an in-house solution.
  • Security audits are required to ensure compliance with nationally or internationally recognised standards.
Legal & Policy Requirements
  • Trust practices which include legal conformance to local signature laws as well as establishing PKI policies and procedures
  • Liability to the company in the event of a legal dispute

Figure #1 shows the inherent costs associated with an in-house solution.  All four components (services, human resources, infrastructure, and legal) are the responsibility of the customer.  In this cost analysis the thickness of the bars is a relative representation of the cost incurred by the customer, giving a view of the total cost of ownership when all costs are taken into account.  When the same analysis is applied to the outsourced model, we arrive at Figure #2.  In this model the customer incurs a much smaller investment in human resources, consultancy, and infrastructure, since the bulk of the investment lies in the Certificate Authority (CA) infrastructure, which the customer takes advantage of as part of the service provided by the Trusted Third Party.  The ownership of a carrier-class processing facility, operations and maintenance, and the legal framework become the responsibility of the CA.  As a result, when the various components of cost are combined, outsourcing results in a 40% to 60% saving over a three-year period when compared to an in-house solution (Figure #3).
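
The 40% to 60% figure comes from the cost analysis behind Figure #3.  Purely as an illustration of how such a comparison is assembled (the dollar values below are hypothetical placeholders, not the figures used in that analysis), a three-year TCO comparison might be sketched as follows:

```python
# Purely illustrative three-year TCO comparison. The cost buckets mirror the
# four categories above (human resources, infrastructure, services, legal);
# every dollar value is a hypothetical placeholder, not data from the article.

in_house = {
    "human_resources": 300_000,   # PKI/RA/CA staffing, project management
    "infrastructure":  250_000,   # hardware, software, secure facility, upgrades
    "services":        150_000,   # training, consultants, security audits
    "legal_policy":    100_000,   # CP/CPS drafting, liability provisions
}

outsourced = {
    "human_resources":  80_000,   # certificate administration only
    "infrastructure":   20_000,   # client-side integration
    "services":        200_000,   # recurring service fees to the CA / TTP
    "legal_policy":     20_000,   # contract review; liability sits with the TTP
}

tco_in_house   = sum(in_house.values())
tco_outsourced = sum(outsourced.values())
savings = 1 - tco_outsourced / tco_in_house
print(f"In-house: {tco_in_house:,} USD, outsourced: {tco_outsourced:,} USD, "
      f"savings: {savings:.0%}")
```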

In the in-house model, the customer must manage their own root key, the private keys of deployed certificates, and audit logs.  In other words, since the infrastructure is typically not protected by a highly secure facility, there is a high risk of the CA being compromised.  This could result in fraudulent activities such as false certificate issuance, private keys being stolen, or digital signatures not being legally binding.  Also, since the company has set its own policies and practices, there is no inherent trust established with any other company which may have set different standards.  This is a fundamental flaw in what is meant to be a “trusted” environment between companies wishing to establish a business relationship.  If a true layer of trust is to be realized, then the customer must rely on a CA or Trusted Third Party (TTP), which ensures that common standards are enforced.  Policies and procedures are managed outside of the organisation, within the TTP.  Therefore, if two companies utilize the same standards of PKI from the same TTP, then they can inherently trust each other.  In-house PKI vendors do not sell policy infrastructure as part of their PKI solution; customers generally need to determine their own policy, then document and implement it.  This leaves customers carrying the risk and responsibility of certificate issuance and authentication.  Outsourcing PKI offloads this risk to the TTP.
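
The trust argument can be made concrete with a toy model.  The sketch below is not any vendor’s validation code; it simply walks a certificate’s issuer chain and accepts it only if the chain terminates at a root the relying party already trusts, which is why two companies subscribing to the same TTP can validate each other’s certificates, while a certificate issued under an independent in-house root cannot be validated (all names are hypothetical):

```python
# Toy model of hierarchical trust. Certificates are plain dicts; a relying
# party accepts a certificate only if its issuer chain ends at a root CA the
# party already lists as trusted. All names are hypothetical.

certs = {
    "TTP Root CA":   {"issuer": None},            # self-signed, commonly trusted root
    "Company A CA":  {"issuer": "TTP Root CA"},
    "Company B CA":  {"issuer": "TTP Root CA"},
    "alice@a.com":   {"issuer": "Company A CA"},
    "bob@b.com":     {"issuer": "Company B CA"},
    "In-house Root": {"issuer": None},            # self-signed, private root
    "carol@c.com":   {"issuer": "In-house Root"},
}

def chains_to_trusted_root(subject: str, trusted_roots: set[str]) -> bool:
    """Walk the issuer chain until a self-signed root is reached."""
    while True:
        issuer = certs[subject]["issuer"]
        if issuer is None:                        # reached a root certificate
            return subject in trusted_roots
        subject = issuer

trusted = {"TTP Root CA"}                         # the standard both A and B rely on
print(chains_to_trusted_root("alice@a.com", trusted))   # True
print(chains_to_trusted_root("bob@b.com", trusted))     # True
print(chains_to_trusted_root("carol@c.com", trusted))   # False: private root
```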

Figure #3: In-house vs. Outsourced PKI Total Cost of Ownership

Proponents of in-house solutions attempt to convince customers that outsourcing may be viable in the short term, but that there is a lack of flexibility in moving to an in-house solution over time.  In fact, this is a contradiction in logic, since flexibility is what the in-house approach lacks.  Customers are locked into a proprietary solution which often results in continuous hardware upgrades as more users are added, or software upgrades as new standards are implemented.  An outsourced solution transfers the responsibility of managing scalability and evolving standards to the TTP, without dramatic changes to the customer’s infrastructure.  In the outsourcing model the TTP sits at the top of the trust hierarchy, which may branch to smaller CAs managed by individual companies.  At the tail-end of this hierarchy is the end-user community, which might consist of distributors, suppliers or manufacturers in a business-to-business (B2B) market, or individuals in a business-to-consumer (B2C) market.  This hierarchy imparts the underlying value which a TTP provides: all users within this umbrella have the comfort of knowing that one consistent standard of trust is applied.

“Outsourced PKI solutions provide a multitude of benefits for businesses. Although the underlying idea for businesses is to transfer the ‘headache’ of having to implement, maintain and administer a PKI solution to a service provider, there are significant strategic and financial advantages in outsourcing security in general and PKI in particular.” Datamonitor

Outsourcing Value Proposition

Figure #4: In-house vs. Outsourced Revenue Growth

Further support for the cloud-based PKI model can be found in various analyst reports.  According to Datamonitor, this market is expected to grow at a 110% CAGR (Compounded Annual Growth Rate) over the next three years.  By the year 2006, outsourced PKI is expected to hold 60% market share compared to in-house deployments.  The importance of outsourcing can be summarized as follows:

  • Customers can focus on their core business – Leave the expertise of PKI to the experts
  • No need to buy hardware & software since the infrastructure is owned by the CA
  • There is a reduced Total Cost of Ownership – No hidden costs are incurred by the customer
  • Liability is transferred to a trusted third party (TTP)
  • Seamless scalability – Upgrades to infrastructure due to additional users and technology changes are owned by the CA
  • There is a reduction in training, hardware, and software investments.  Expertise is left to the CA, so only minimal training is required to administer certificates.
  • Consultancy fees are minimized, due to faster project implementation
  • Trust is enabled with other companies.  The value of the TTP provides a common denominator of trust for all companies.

About the Author

Gabriel Dusil is VeriSign’s Marketing Director responsible for the Europe, Middle East and African region. Mr. Dusil’s role includes the management of Channel and Direct Marketing, as well as Marketing Communications. His responsibilities also include the development of product strategies and market positioning throughout the emerging markets.

Prior to VeriSign, Mr. Dusil had been with Motorola for six years, as their EMEA Marketing Director for its Internet and Networking Group.  He has over 10 years of experience in the communications industry, and over nine years of international marketing experience. Mr. Dusil has a degree in Engineering Physics from the University of McMaster, in Canada.

Motorola – From Convergence Hype To Multiservice Reality

Portfolio - Motorola, From Convergence Hype (title)

The Biggest Shift In Communications Is Just Around the Corner

Packet switching technology is starting to hit traditional Public Switched Telephone Networks (PSTN) where it hurts the most – in the wallet. Investment in circuit switching infrastructure is slowing down; Dataquest states that the telecommunications industry has grown a mere 8% in the last two years, compared to a staggering 23% growth in the data communications sector.  Some areas have shown a 100% increase in growth, such as the Computer Telephony Integration (CTI) market.

This is mainly due to the enormous cost and functionality benefits of putting all communication applications – voice, email, business-critical applications, and video – onto one multiservice network using packet switching.

Analysts estimate that subscribers may expect as much as 40% off their existing phone bills by sending voice in packets (over a Frame Relay [FR] or the Internet Protocol [IP] network), rather than a dedicated point-to-point connection over the PSTN.  These cost savings come from a more efficient use of bandwidth due to the packetization and compression of voice, together with sophisticated queuing mechanisms to ensure a high quality voice conversation.

Table #1. Bandwidth Requirements in IP

When using a switched circuit connection, the subscriber pays for the duration of the call, including hold time and natural pauses in a conversation.  With packet voice, the subscriber only pays for “bytes” (i.e. the amount of data sent), not the duration of the connection.  Hold time and pauses are subsequently not chargeable.  In addition, “redundant” voice information is removed by using sophisticated compression algorithms such as the ITU-T standards G.729A and G.723.1.  In these algorithms, real-time speech analysis will remove up to 75% of a conversation, then the remaining 25% is compressed.
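
As a rough, back-of-the-envelope illustration of that arithmetic: uncompressed telephony (G.711 PCM) uses 64 Kbps per direction, G.729A encodes active speech at roughly 8 Kbps, and silence suppression removes the idle portion of the call.  The sketch below only reproduces the figures quoted in this article and deliberately ignores packet and header overhead (which Table #1 addresses):

```python
# Back-of-the-envelope payload bandwidth for one voice call, ignoring IP/FR
# header overhead. 64 Kbps is standard G.711 PCM; 8 Kbps is the nominal G.729A
# rate; the 75% "removable" share is the figure quoted in the article.

pcm_rate_kbps   = 64.0   # traditional circuit-switched voice (G.711 PCM)
g729a_rate_kbps = 8.0    # compressed speech (G.729A)
active_fraction = 0.25   # article: up to 75% of a conversation is removed

effective_kbps = g729a_rate_kbps * active_fraction
print(f"Average payload per call: {effective_kbps:.1f} Kbps "
      f"vs {pcm_rate_kbps:.0f} Kbps PCM "
      f"({effective_kbps / pcm_rate_kbps:.1%} of the original)")
```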

A typical “before and after” comparison of a company that migrates to packet voice shows a huge saving, with phone bills dropping to a tenth of the original cost, for example from 800 US$ to 80 US$ per month.  Realizing financial benefits from voice compression is only the first step in convergence.  Multiservice networks offer support for technologies above and beyond the traditional telephone network:

Multimedia

  • Supports real-time applications such as voice & video as well as efficiently managing various flavors of data applications over a single, shared network.

Multi-application

  • Simultaneous administration of World Wide Web (HTML) traffic, Computer Telephony (CTI) applications, and Unified Messaging in a single multiservice environment.

Multi-network

  • Frame Relay [FR], Asynchronous Transfer Mode [ATM] and Internet Protocol [IP] support, as well as connectivity to Private Branch Exchanges (PBX) and the various circuit-switched networks such as EuroISDN.

Multi-Protocol

  • IP, IPX, Business Critical Applications such as IBM SNA, Burroughs Poll Select, Airline Control Protocol, BSC, and various other Async protocols.

It is not only cost savings that make multiservice networks the savior which will likely take over the PSTN market in the next two years.  A multiservice network can provide all the features of a traditional PSTN, such as call waiting and conference calls, plus meet the demands of carrying video, provide web access and link to unified messaging systems, and offer a single management platform for a single multimedia infrastructure.

Migrating Intelligence

The telecommunications industry is in the process of migrating from a proprietary industry to an open industry.  This trend can be viewed from the perspective of how the PC industry had developed over the past 30 years.  In the 70‘s companies such as IBM (International Business Machines) and DEC (Digital Equipment Corporation) provided end-to-end solutions, from the mainframe, to terminals, from cables to applications.  The entire infrastructure was sourced from a single manufacturer.

As the PC industry took off in the 80’s, an open industry began to emerge whereby manufacturers offered one piece of the solution pie.  One manufacturer would focus on PC sales only, another sold servers, another installed cables, and yet another specialized in value-added services.

The PC industry evolved from hierarchical networks to client-server.  When corporations needed to increase the size of their network, a bigger mainframe was required.  Today, when we need to increase the size of the network we buy another server and a few more clients (PCs).  This is an integral part of the success of client-server architectures, in that network expansion is progressive and not financially discouraging.

Figure #1. Price Erosion in IT versus Communications

Another dynamic is found in Moore’s law[1].  We have enjoyed the 18-month half-life trend in the PC industry, but have not been privy to this trend in communications.  For example, PC prices have decreased by half every 18 months, but our phone bills have remained relatively the same over this period.  Typically the “half-life” of communication prices has been 5 years (see Figure #1).  Thankfully, all of that is about to change due to convergence, as the new convergence industry takes advantage of traditional PC architectures such as:

  • CPUs (e.g. Motorola PowerPC and Intel’s Pentium)
  • Bus (e.g. PCI, Universal Serial Bus [USB] and IEEE 1394 – Firewire)
  • Operating Software (e.g. Microsoft Windows & LINUX)

The same “price erosion trends” will be observed, and Moore’s law will apply to convergence networks as well.
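
A minimal numeric illustration of the two half-lives discussed above (the 18-month and 5-year figures are those used in Figure #1 and reference [1]; the starting price is arbitrary):

```python
# Price erosion under a "half-life" model: price(t) = price0 * 0.5 ** (t / half_life).
# 18 months is the PC/IT half-life derived from Moore's law; 5 years (60 months)
# is the article's figure for traditional communications pricing.

def eroded_price(price0: float, months: int, half_life_months: float) -> float:
    return price0 * 0.5 ** (months / half_life_months)

price0 = 2000.0  # arbitrary starting price in US$
for months in (18, 36, 60):
    it_price    = eroded_price(price0, months, half_life_months=18)
    telco_price = eroded_price(price0, months, half_life_months=60)
    print(f"After {months:2d} months: IT {it_price:7.0f} US$, "
          f"communications {telco_price:7.0f} US$")
```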

As routers evolve to multiservice, proprietary hardware migrates to PC-based implementations, and corporate networking solutions move to the commodity market, the benefits will be reaped by the end user.  With the communications industry opening up to new providers, the market will become increasingly competitive.  Consumers will soon enjoy the benefits of this cost-cutting, competitively aggressive market in communication hardware and software.

The current trends in the communications industry now mimic the PC industry: networks are moving from proprietary to open solutions, and from mainframe solutions to client-server.  This also implies that intelligence is moving from the core to the edge of networks, where the edge, in this case, is represented by the ingress or egress communication device (see Figures #2 and #3).  The Multiservice Access Device can be considered analogous to the client, responsible for security, quality, and reliability of communications.  What now represents the server is a “gatekeeper”, responsible for bandwidth management, registration, administration, and status of the network, as well as management of traditional telco services such as billing.  The gatekeeper and multiservice devices manage communications as servers and clients respectively, ensuring the end-to-end transfer of information.

Figure #2. Characteristics of a Traditional Voice Network, PSTN

Voice Network Characteristics:

  • Intelligence in the backbone
  • Mainframe Based Management (AIN:  Advanced Intelligent Networks)
  • Highly scalable to hundreds of thousands or even millions of users
  • Monopoly based solutions (One provider per country)
  • Circuit Switching
  • Highly reliable and stable
  • Poor support for Data transfer over the Voice network (e.g. non-guaranteed speeds of 56 Kbps V.90 modem communications)

Although client-server will emerge in the new world of convergence, the mainframe concept will not diminish (for the same reasons mainframes have survived in the PC industry: reliability and stability[2]).  To bring this into perspective, while client-server scales well to thousands or tens of thousands of subscribers, mainframes have traditionally scaled to hundreds of thousands of subscribers.  Client-server simply will not match the processing required for a full-scale voice infrastructure.  For this reason, intelligence will remain in the core of converged networks (voice networks will remain hierarchical as a whole), but client-server will emerge and add value at the edge of these infrastructures.

So, what is the impact on the subscriber?  Simply put, we will have more choice in the providers we use, the connection type, the services we want, and even the level of quality for the traffic we transmit.  We will even have a choice of which wire type carries our telephone calls (in the past we were restricted to a simple 2-wire copper pair).  We could run our voice over standard Ethernet coax, home cable, optical connections, or any flavor of copper that exists, such as unshielded twisted pair (UTP) or shielded twisted pair (STP), and all of the categories in between.  To Managers of Information Systems (MIS) this is a blessing in disguise.

Figure #3. Characteristics of Converged Networks

Data Network Characteristics:

  • Intelligence throughout the network
  • Mainframe Management in the core, with Client Server Based Management at the edge
  • Highly scalable: mainframe scalability to millions of users; client-server scalability to tens of thousands of users
  • Open solutions for the subscriber (many providers per region offering various levels of services)
  • Packet Switching
  • Excellent support for voice and video over data networks (e.g. H.323 support with QoS)

In the past we have relied on one type of infrastructure, one type of connection, one type of telephone, and one provider servicing all our needs.  All this is about to change.  Not only will the industry change, but, more importantly, so will the minds of consumers.  The bottom line is that consumers will be able to choose from a much broader range of products and services.

The Wireless Craze

As a society, we have evolved from radio to TV, from black & white photography to color, and from analog to digital networking.  The next wave of change is from wire-line to wireless communications.

The advent of Wireless Information Devices (WID) is being driven by the increasing demand for multimedia capabilities in all aspects of our lives.  In response, voice and data providers are investing heavily in the development of technologies that deliver mobile multimedia services.  Generation 2.5 services such as GPRS (General Packet Radio Services) are the next evolution in packet-based wireless networks[3], slated to provide data services to mobile phones and hand-held communication devices.  UMTS (Universal Mobile Telecommunications System), the third generation, will offer users an alternative for high-speed multimedia communications (see Figure #4).  Web-based services will move to various hand-held communicators, allowing connectivity to the world from any location on the planet.

Figure #4. The Wireless Evolution in Europe

These next generation wireless technologies are packet based rather than circuit switched. The communications industry has realized the potential of packet technology.  Subsequently the investments in developing circuit switched infrastructures have ground to a halt.

The Need for Liberalization

With existing investments amounting to billions of dollars in PSTN infrastructure, it is not surprising that ILECs (Incumbent Local Exchange Carriers) are reluctant to compromise their existing revenue streams.  If investment protection is the name of the game, then providers will retain their pseudo-monopolies across Europe for as long as possible, so as to enjoy existing tariffs in circuit-switched communications before packet technology is widely adopted.  Nevertheless, emerging convergence providers are beginning to enter the European market from the USA and the Far East, such as:

  • Competitive Local Exchange Carriers (CLECs), such as Cable and Wireless in the UK
  • Internet Service Providers (ISPs), such as America Online (AOL)
  • Internet Telephony Service Providers (ITSPs), such as Zephyr Communications
  • and Network Service Providers (NSPs)

These competitors are giving the incumbents a run for their money, with lower cost and higher margin per call and lower initial infrastructure investment, while matching or even exceeding value-added telephony features.  The strategy of these players is to capture early market share before the incumbents have a chance to look over their shoulder.  This is a conventional tactic in the competitive world of the IT industry; smaller competitors, by default, have the agility to dynamically align themselves to the requirements of the industry.

In the face of stiff competition, many incumbent providers will adopt packet technology, albeit with a much slower adoption curve, simply due to the massive momentum of large companies, which makes it harder to veer from an existing strategic path to a new one.  However, once ILECs enter the convergence race, it will be difficult for competitive providers to capture additional market share.

Corporations will be the first customers to observe the benefits of convergence.  But until the market is fully deregulated in Europe, the benefits of packet voice and multiservice networks will not be fully realized in the eyes of the consumer.  The future of communications will most definitely be packet-based, though until the incumbent providers’ hold on the market is loosened, the spread of multiservice networking will be slower.

Getting the Quality You Expect

As deregulation in Europe continues, a parallel issue is the quality of voice calls on data networks.  Quality of Service (QoS) is the hot topic of discussion, and it is most certainly the first issue mentioned when discussing the latest in voice communications over data networks.  Currently, the Internet is unable to handle the demands of real-time traffic such as voice or video.  Analysts predict that it will be three to five years before the appropriate changes are implemented.

In the interim, multiservice networking offers substantial benefits in private corporate networks today.  The bandwidth requirements to send voice over a corporate WAN (wide area network) are dramatically lower in comparison to the demands of web browsing or file transfers.   Moreover, quality of service enhancements can be realized more readily in this controlled and somewhat predictable environment.

How an enterprise customer implements quality of service depends on the transport protocol being used in the network.  The choice at the moment comes down to two: Frame Relay or Internet Protocol (IP).  Although analysts may argue that Asynchronous Transfer Mode (ATM) is a third alternative, ATM has yet to prove itself as an end-to-end protocol (with its minimum speed of 2 Mbps) in a world where 33.6 Kbps V.34 modem connections continue to dominate the last mile.

Frame Relay ensures quality of service in the core architecture of its standard.  Committed Information Rate (CIR), Discard Eligible (DE), and Backward and Forward Explicit Congestion Notification (BECN and FECN respectively) are parameters used to guarantee bandwidth, prioritize traffic, and minimize congestion.  Additional intelligence is architected into Motorola’s Multiservice Access Device, which implements:

  • Sophisticated queuing mechanisms to minimize delay between packets,
  • Prioritization enhancements, and
  • Consistent flow of real-time applications.

For example, real-time applications are prioritized over data packets at the access node, and as information moves along the virtual circuit from source to destination, Frame Relay ensures that:

  • Data is received with zero error at the expense of delay, and
  • Voice is received with minimal delay, at the expense of errors

Figure #5. Converged Industries

In essence, Frame Relay avoids much of the packet delay and packet loss observed in IP.  Packet delay can occur with IP due to the protocol’s lack of flow control, congestion control mechanisms, queuing schemes, and packet prioritization.  However, Quality of Service (QoS) techniques such as Differentiated Services (DiffServ) and IEEE 802.1p/q will provide workable solutions in the interim, until IP version 6 becomes available.
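
To make the queuing argument concrete, here is a minimal sketch of strict-priority scheduling of the kind described above, in which real-time voice frames are always served before best-effort data.  It is an illustration of the general technique only, not Motorola’s implementation and not a DiffServ-compliant scheduler:

```python
from collections import deque

# Minimal strict-priority scheduler: voice frames are always dequeued before
# best-effort data frames. This illustrates why prioritization keeps voice
# delay low at the expense of data; it is not a model of any specific product.

voice_queue: deque[str] = deque()
data_queue:  deque[str] = deque()

def enqueue(packet: str, is_voice: bool) -> None:
    (voice_queue if is_voice else data_queue).append(packet)

def dequeue() -> str | None:
    if voice_queue:                  # real-time traffic always goes first
        return voice_queue.popleft()
    if data_queue:
        return data_queue.popleft()
    return None

for pkt, is_voice in [("data-1", False), ("voice-1", True),
                      ("data-2", False), ("voice-2", True)]:
    enqueue(pkt, is_voice)

while (pkt := dequeue()) is not None:
    print(pkt)   # voice-1, voice-2, data-1, data-2
```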

With the growth of the Internet, intranets, and extranets over the past decade, IP has also become the default choice for networking.  But for a protocol which is over 25 years old, IP was clearly not designed for the demands of modern communications.  This is a clear example of how a superior technology may not necessarily win the mind share of the consumer, or the market share of the industry.  Case in point: the Betamax versus VHS battle of the 70’s, and the 1980’s battle between CISC (Complex Instruction Set Computing) and RISC (Reduced Instruction Set Computing) processors.

Frame Relay, which was developed in the early 1980’s, is currently the most efficient wide area protocol available.  No communication standard for Wide Area Networks (WAN) today can match the efficiency and flexibility of Frame Relay.  Many enterprise “early adopters” have used Frame Relay, where possible, as the vehicle for real-time voice, inter-operating with IP in the remaining locations.  To ensure investment protection in these heterogeneous networks, corporations have chosen multiservice access devices which have the capability to connect to Frame Relay, ATM, or IP, ensuring that the proper QoS solution is supported to guarantee trouble-free network expansion in the future.

Inter-operating With the Rest of the World

For multiservice networks to be adopted on a large scale, multiservice devices from different vendors must inter-operate, especially given how networks are expected to look in the near future.  We are in the process of moving beyond the simple phone-to-phone connection to an entirely new communications market that is redefining the meaning behind “connectivity”.  The industry is overcoming the following challenges as it works towards a single converged network:

  • Merging of the Voice, Data, and Media Industries (See Figure #5).
  • Connecting various ranges of IT Equipment, and ensuring transparency in connecting them together.
  • Integrating a varying range of applications, with inter-operability, management and conversion between relative standards.
  • Connecting a plethora of communication devices together in a single heterogeneous network, and making sure that the entire network is manageable.
  • Connecting various Network Architectures, so that signaling is controlled, and stability is maintained.

To make these changes happen, the major industry players must share a common goal of open standards.  Initiatives such as the committee for Telecommunications and Internet Protocol Harmonisation Over Networks (TIPHON), and other international bodies for inter-operability testing such as iNOW (Interoperability Now), are critical to the success of convergence.

Despite the enormity of the foreseeable changes, it is clear that convergence is in the embryonic stage of its life cycle, when viewed from a historical perspective.  But the changes will be observed at a much faster pace than ever before.  To bring this into perspective, the rate of change in communications which we have observed over the past 30 years will be compressed into the next 5 years.

TIPHON is a European Telecommunications Standards Institute (ETSI) initiative set out to connect existing networks to a single managed network connected via IP.  TIPHON will utilize existing standards where available, and specify standards to fill any gaps.  The aim is to ensure global network inter-operability between various communication devices.  TIPHON consists of 7 Working Groups, responsible for developing various aspects of inter-operability such as Signalling, Management, Wireless connectivity, etc.

An integral piece of this initiative is the ITU (International Telecommunications Union) H.323 telephony standard.  Extensive work has been done on this specification, which provides a platform for multimedia communications over packet technology.  Although the adoption of H.323 does not guarantee interoperability between third-party devices, customers should ensure that the manufacturer is working closely with accredited standards bodies such as TIPHON and iNOW.

An awareness of the issues has brought a realisation that vendors must work together on a common standard in order to ensure success in this convergence initiative.  Stopping at standards and considering the issue closed is unacceptable; this is what distinguishes the “thinker” from the “doer”.  Through initiatives such as TIPHON, the industry has taken the additional step of addressing interoperability concerns to ensure that standards are implemented properly.

Communications for the Future

Over the next two years, the convergence players will proactively work towards a common goal in solving quality of service issues, specifically focusing on interoperability.  We will then see a large number of organisations moving to capitalise on the cost and functionality advantages of multiservice networks.  Personal communications will unite mobile voice with laptop data into a new breed of integrated multimedia devices.  The consumer mass-market implementation is just around the corner.

We have already seen a number of communications players, such as Lucent/Ascend and Bay/Nortel, joining forces in preparation for the market explosion.  International Data Corporation’s (IDC) prediction is that by 2002, packet voice revenues will equal those of the PSTN.  Motorola is already in a strategic position to offer converged solutions today, with over 60 years’ experience in voice and over 37 years in data.

With the tectonic shift in the communications industry today, do not be surprised by the change in your Telecom Provider’s logo on next year’s phone bill.

Acronyms

  • CDMA               Code-Division Multiple Access
  • CPU                    Central Processing Unit
  • DOCSIS             Data Over Cable System Interface Spec.
  • FTP                    File Transfer Protocol
  • GPRS                 General Packet Radio Services
  • GSM                   Global System for Mobile Communications
  • HTML               Hypertext Markup Language
  • IEEE                  Institute of Electrical and Electronics Engineers
  • IP                        Internet Protocol
  • IPX                     Internet Packet Exchange
  • ISDN                  Integrated Services Digital Network
  • Mbps                  Megabits (millions of bits) per second
  • MPEG               Moving Picture Experts Group (ISO)
  • NMT                  Nordic Mobile Telephony
  • PCI                     Peripheral Component Interconnect
  • PCM                   Pulse Code Modulation
  • PSTN                 Public Switched Telephone Networks
  • SMTP                Simple Mail Transfer Protocol
  • SNA                    Systems Network Architecture
  • SS7                     Signaling System 7
  • TDMA               Time-Division Multiple Access
  • UMTS                Universal Mobile Telecommunications System
  • URL                   Uniform Resource Locator
  • xDSL                  Digital Subscriber Line

References

[1] Moore’s law was formulated by Gordon Moore in 1965, based on the observation that the number of transistors on computer processors doubles every 18 months.  As the PC industry emerged two decades later, Moore’s law was reinterpreted to mean that the processing power of computers doubles every 18 months.  When applied to IT finance, we observe that the price of IT has a half-life of 18 months.  For example, a 200 MHz computer purchased at 2000 US$ on January 1st, 1998, will be sold for 1000 US$ on July 1st, 1999.

[2] The overwhelming proof of IBM’s mainframe reliability is in their written guarantee of a 30 year MTBR (Mean Time Between Reboot) for the 390 mainframe series.  Compare this to a Windows NT server, which typically requires several reboots per year.

[3] An example of First Generation, Circuit Switching Analog Wireless networks is Nordic Mobile Telephony (NMT) originating from Scandinavia.  Second generation wireless networks are based on Circuit Switching Digital technology such as GSM.