Agile Big Data and Many-Particle approach change Marketing and Sales effectiveness

Big Data projects have a broad impact on organizations. Together with the many-particle approach to data aggregation, a Big Data implementation can be considered a new way to align data management with the business. With Big Data, the path from data sources to data intelligence changes drastically: the way data intelligence is designed and implemented definitively changes how data is accessed, ingested, distilled, processed and visualized. When Big Data projects meet agile implementation, the data intelligence lifecycle shortens, increasing service capability and adequacy to fast-growing datasets and fast-moving business. Accordingly, agile practice and the many-particle approach minimize data entropy together with data access time cycles everywhere, preserve data security and enhance the user experience for instant business realignment.

Contents
Introduction
Data Topology and Agile Big Data
The Many-Particle approach
Conclusion
Acknowledgment
References

Introduction
Moving from today's business data to Big Data intelligence can be a costly and time-consuming process that erodes the tremendous advantage of the Big Data and Cloud paradigms. Today, information is still misaligned with the business despite the huge efforts of past business intelligence projects: companies still use only partial quantities of their real corporate data heritage. As a consequence, the data spectrum exploited is unpredictable, and aligning data with the business is a long-term process. Agile Big Data instantly aligns the data heritage with the business data. Continuous data ingestion and distillation drastically reduce the ETL process, so that intelligence runs on the “big data-lake” when needed. On-premise big data topology and functional data intelligence then play a crucial role in meeting profitability, customer affinity and fast-moving business goals. This paper introduces the business case for Big Data to avoid Marketing and Sales data entropy, reduce risks and increase the likelihood of an aware and successful Big Data implementation.

Data Topology and Agile Big Data
Documenting data evolution and updates has long been considered good practice in data management. At the beginning of the cloud paradigm, driven by the attraction of cost cutting, keeping a map of the company's data heritage became a great benefit, especially when services had to be subscribed in the cloud. Data models, a way to document the data heritage, evolved into MaaS (Model as a Service), which supports agile design and delivery of data services in the Cloud and makes the difference when planning a Big Data implementation project.

Considering data models doesn’t mean structured data only. On-premise models map data coming from structured, semi-structured and unstructured sources. Data models map the defined services topology, whether services are to be run on-premise or in the cloud. Data models are still needed for early exploration analysis and for the “ab-initio” classification parameters that define service boundaries (personal cloud, financial parameters or healthcare positions, for example); data models (over SQL, NoSQL, vector or graph structures) essentially do not address what the data means but identify the service classes before the data-lake is created. Of course, unusable data and unstructured or denormalized raw datasources converge into the data-lake as well. The more aware the on-premise topology, the more secure and localizable the big data usage, both on-premise and in the Cloud. Further, the agile MaaS approach reveals the business processes affected, the operating requirements and the stakeholders.
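As a minimal, hypothetical illustration of this kind of ab-initio classification (the datasource names, service classes and boundary labels below are invented, not taken from the paper), a MaaS-style topology entry can simply tag each datasource with the service class and deployment boundary it belongs to before anything lands in the data-lake:

```python
from dataclasses import dataclass

@dataclass
class DatasourceEntry:
    """One on-premise topology record: maps a raw datasource to a service class."""
    name: str            # datasource identifier
    structure: str       # "structured" | "semi-structured" | "unstructured"
    service_class: str   # ab-initio service classification
    boundary: str        # "on-premise" | "private-cloud" | "public-cloud"
    sensitive: bool      # drives security and localization of big data usage

# Hypothetical topology entries documenting the data heritage before data-lake landing
topology = [
    DatasourceEntry("crm_orders", "structured", "customer-services", "private-cloud", True),
    DatasourceEntry("web_clickstream", "semi-structured", "marketing-analytics", "public-cloud", False),
    DatasourceEntry("support_emails", "unstructured", "customer-services", "on-premise", True),
]

# Service classes that must remain localizable (on-premise or private cloud only)
restricted = {e.service_class for e in topology if e.sensitive}
print(restricted)  # {'customer-services'}
```

The point is not the data structure itself but that the classification exists before the data-lake is created, so that later usage of the lake stays secure and localizable.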

Fig. 1 – Corporate Data-Lake and Agile Big Data approach

Accordingly, agile Big Data practice sets the link between on-premise data topologies and data intelligence running on-premise or in the cloud. Topology leverages the company's services asset towards specific business objectives and determines the user experience requirements and the rapid alignment needed with respect to competitors.

This means that two crucial aspects have to be taken care of:

  • Data is the “compass” for understanding service capacity, stakeholders and the culture of the organization: big data agility is based on a data-driven approach. Therefore, minimize functional data behaviour in the incoming project setup and use the MaaS topology to define data-driven project use cases. Data-driven project design defines the data ingestion architecture and the data landing into the data-lake, and helps in understanding the best policy for continuous data feeding. Do not disregard this aspect: accurate data feeding is the core of any Big Data approach;
  • Move data analysis and functional aggregation to the data intelligence applied on the data-lake; during ingestion and data landing, data treatments have to be minimized (see the sketch after this list). The agile Big Data approach considers two zones: the in-memory zone, based on data topology and supported on-premise by MaaS, and the data intelligence zone, based on functional analysis and programming working on sparse data.
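The sketch below is a deliberately small, hypothetical example of the light-touch ingestion the first bullet argues for (the landing path, source names and metadata fields are assumptions, not from the paper): raw records land in the data-lake untouched, with only topology metadata attached, leaving every functional treatment to the downstream data intelligence zone.

```python
import json
import time
from pathlib import Path

DATA_LAKE = Path("datalake/raw")  # hypothetical landing zone

def land(record: dict, source: str, service_class: str) -> Path:
    """Land one raw record in the data-lake with topology metadata only.

    No cleansing, aggregation or functional treatment happens here: those
    steps are deferred to the data intelligence zone working on the lake.
    """
    envelope = {
        "source": source,                # where the record was ingested from
        "service_class": service_class,  # ab-initio topology classification
        "ingested_at": time.time(),      # landing timestamp
        "payload": record,               # raw, untouched content
    }
    DATA_LAKE.mkdir(parents=True, exist_ok=True)
    out = DATA_LAKE / f"{source}-{int(envelope['ingested_at'] * 1e6)}.json"
    out.write_text(json.dumps(envelope))
    return out

# A continuous feeding loop would call land() for every incoming record
land({"customer": "c-102", "event": "quote_requested"}, "crm_orders", "customer-services")
```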

Still, minimize any approach based on “ab-initio” technology and software development. The Big Data ecosystem provides excellent platforms, and the agile MaaS approach helps to defer the final technology choice/selection. Further, agile MaaS practice helps to clarify success and failure zones and to set expectations over time. This happens because, once services have been set by the on-premise topology, a link has been stretched between the data heritage and the data intelligence. There are no constraints between the raw data (documented or not) and the user experience that will leverage functional and business alignment. In the middle only the data-lake exists, continuously changing and growing, continuously supplying information to the data intelligence end.

The Many-Particle approach
Today, more than 70 percent of the world’s information is unstructured, not classified and, above all, misused: we are witnessing the greatest Marketing and Sales data myopia ever. There is still little awareness of the Big Data benefits for service and/or product companies, or of how product companies can change the services built around their goods production: great amounts of data, growing exceptionally fast, with high entropy, unknown correlations and limited data usage. The concept of on-premise topology introduces services as data-driven aggregation states applied to given parts of the data-lake. But this is what happens in many-particle system instability (a yottabyte is 10^24 bytes, or 2^80 bytes in binary terms). Big data storage dimensions bring the data-lake close to a many-particle system. This vision overturns any traditional approach to Marketing and Sales.

The big data-lake contains fast-moving content ordered by data affinity and mass correlation. Depending upon dynamic data aggregation, data topologies may change by tuning the on-premise data mapping. Data-lakes are mainly fed through:

– ingestion, distillation and landing from content-based sources (datasources, datasets, operational and transactional DBs);
– ingestion and distillation from collaborative feeding (dynamic collections of large amounts of information on users’ behaviour coming from the internet, direct and/or indirect).

Collaborative ingestion can also be managed as content-based when the time needed to reach the data intelligence end has no strict constraints; this defines a third, hybrid method.
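As a rough illustration only (the enum names and the latency threshold are assumptions, not from the paper), the three feeding methods can be seen as a simple routing decision driven by how quickly the ingested data must reach the data intelligence end:

```python
from enum import Enum

class FeedMode(Enum):
    CONTENT_BASED = "content-based"  # datasources, datasets, operational/transactional DBs
    COLLABORATIVE = "collaborative"  # behavioural collections gathered from the internet
    HYBRID = "hybrid"                # a collaborative feed handled as content-based

def route_feed(is_collaborative: bool, max_latency_hours: float) -> FeedMode:
    """Pick a feeding mode; collaborative feeds with relaxed latency become hybrid."""
    if not is_collaborative:
        return FeedMode.CONTENT_BASED
    # Hypothetical threshold: if intelligence is not needed within a day, batch it
    return FeedMode.HYBRID if max_latency_hours >= 24 else FeedMode.COLLABORATIVE

print(route_feed(is_collaborative=True, max_latency_hours=48))  # FeedMode.HYBRID
```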

This brief introduction tries to explain that the data-lake maps ab-initio topologies to services, but it may also classify the ecosystems the services are defined in and applied to. Services live in ecosystems, and ecosystems depend upon data aggregation (why, where, how and by whom the data is used); just like aggregation states, big data density changes dynamically. These changes are a consequence of the datasources ingested, user experiences, customer behaviours, ecosystem interactions and, of course, business realignment. Marketing and Sales should change accordingly. But since the data-lake may grow by 40 percent per year (in line with the estimate of the worldwide rate of information growth, taking into account that unstructured data is growing 15 times faster than structured data – source IBM®), there is no way for a marketing and sales organization to get any (predictive) control, even where data warehousing and/or sophisticated traditional data mining and analysis are in place.

In any case, data growth will be greater than ever in the coming years, so the variance of data aggregation in the data-lake will rise exponentially: this means many opportunities could be lost and, again, further marketing and sales entropy. Ab-initio topology through the agile big data approach, with functional programming applied to the data-lake, supplies the best answer for prescriptive analysis on many-particle big data systems. In fact, the data-lake allows working on data cross-aggregation optimization, customer experience and aggregation states for realigning services with the business ecosystems. The data-lake is also an extraordinary real-time “what-if set” for prescriptive scenarios, data processing assumptions and data risk propensity.
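To make “functional programming applied to the data-lake” concrete, here is a deliberately small, hypothetical sketch (the field names and affinity keys are invented): the same raw, landed records are cross-aggregated with pure functions along different affinity keys, leaving the underlying data untouched, which is the kind of what-if aggregation described above.

```python
from collections import defaultdict
from functools import reduce

# Hypothetical raw data-lake records (already landed, untouched)
records = [
    {"ecosystem": "banking", "service": "advisory", "value": 120.0},
    {"ecosystem": "banking", "service": "advisory", "value": 80.0},
    {"ecosystem": "goods",   "service": "support",  "value": 40.0},
]

def aggregate(recs, key):
    """Pure cross-aggregation: group records by an affinity key and sum their values."""
    groups = defaultdict(list)
    for r in recs:
        groups[r[key]].append(r["value"])
    return {k: reduce(lambda a, b: a + b, v, 0.0) for k, v in groups.items()}

# What-if: re-aggregate the same raw data along different affinity keys
print(aggregate(records, "ecosystem"))  # {'banking': 200.0, 'goods': 40.0}
print(aggregate(records, "service"))    # {'advisory': 200.0, 'support': 40.0}
```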

Fig.2 – The Data-Lake is quickly becoming a Data-Sea with multi-particle-like data behaviour and dimension

Banking and Goods Production are two typical examples of agile Big Data implementation. Both supply services. Both are trying to align their offers instantly and proactively with business changes. Banking and financial services play a strategic role in relationship management and profitability performance for corporate groups, client companies and commercial banking networks. This is why financial applications need to be rapidly synchronized with ecosystem fluctuation states, as ecosystem participants change their behaviour everywhere due to local and international business conditions. The functional big data paradigm working on many-particle data aggregation is prescriptive with respect to unpredictable service transitions: it agilely realigns ecosystem service directions over the on-premise data topology mapping.

Goods production may tune services as a consequence of user experience, for example by executing more focused and less time-consuming recommender systems. Goods production companies are racing to provide personalized technical and commercial services, greater client loyalty and prescriptive offers from the moment clients interact with or navigate the company website. With agile big data and the many-particle approach, goods production can better exploit user similarity through massive data-lake aggregations. Fast-moving data aggregations constantly feed the functional data intelligence, realigning services and repositioning topological correlations over on-premise data similarities.
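As a purely illustrative sketch of the user-similarity idea (the interaction vectors and user identifiers are invented), cosine similarity over aggregated interaction profiles is one common way a recommender can exploit data-lake aggregations to drive a prescriptive offer:

```python
import math

# Hypothetical aggregated interaction vectors per user (e.g. counts per product category)
profiles = {
    "user_a": [3, 0, 5, 1],
    "user_b": [2, 0, 4, 0],
    "user_c": [0, 7, 0, 2],
}

def cosine(u, v):
    """Cosine similarity between two interaction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# The most similar user to user_a drives the prescriptive offer
target = profiles["user_a"]
best = max((k for k in profiles if k != "user_a"), key=lambda k: cosine(target, profiles[k]))
print(best, round(cosine(target, profiles[best]), 3))  # user_b 0.983
```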

Two different paces, the same objective: be prescriptive, understand early which data aggregation state is the most proper along the data-lake instability, and then continuously realign product offers and service configurations, consequently keeping oversight of the ecosystems. An on-premise topology gauged on data-lake volume, data velocity and variety allows Marketing and Sales to tune into the effective data aggregation and promptly adjust services to the ecosystem.

Conclusion
Client sentiment and user experience behaviour analytics allow rapid changes to product offerings or customer support, which in turn enhance customer fidelity and business improvement. However, data is growing exponentially and business alignment has to be provided in increasingly decentralized environments. The agile MaaS approach, based on data-driven raw volume, data velocity and variety together with the on-premise services topology, is a relatively low-cost and lightweight model. Topology does not influence data treatment: data remains intact, while service integrity and classification drive business, user experience and ecosystem alignment. Accordingly, the agile practice and many-particle approach we introduced minimize data entropy together with data access time cycles everywhere, preserve data security and enhance the user experience for functional visualization realignment.

Acknowledgment
I sincerely thank Paolo La Torre for his precious feedback on the contents and his encouragement to publish this paper. Paolo works as a Commercial, Technical and Compliance Project Supervisor for Big Data planning and engagement directions in finance and banking.

References
N. Piscopo, M. Cesino – Gain a strategic control point to your competitive advantage – https://www.youtube.com/watch?v=wSPKQJjIUwI
N. Piscopo – ID Consent: applying the IDaaS Maturity Framework to design and deploy interactive BYOID (Bring-Your-Own-ID) with Use Case
N. Piscopo – A high-level IDaaS metric: if and when moving ID in the Cloud
N. Piscopo – IDaaS – Verifying the ID ecosystem operational posture
N. Piscopo – MaaS (Model as a Service) is the emerging solution to design, map, integrate and publish Open Data
N. Piscopo – Best Practices for Moving to the Cloud using Data Models in the DaaS Life Cycle
N. Piscopo – Applying MaaS to DaaS (Database as a Service ) Contracts. An introduction to the Practice
N. Piscopo – MaaS applied to Healthcare – Use Case Practice
N. Piscopo – ERwin® in the Cloud: How Data Modeling Supports Database as a Service (DaaS) Implementations
N. Piscopo – CA ERwin® Data Modeler’s Role in the Relational Cloud
N. Piscopo – Using CA ERwin® Data Modeler and Microsoft SQL Azure to Move Data to the Cloud within the DaaS Life Cycle

Disclaimer – This document is provided AS-IS for your informational purposes only. In no event will the contents of “Agile Big Data and Many-Particle approach change Marketing and Sales effectiveness” be liable to any party for direct, indirect, special, incidental, economic (including lost business profits, business interruption, loss or damage of data, and the like) or consequential damages, without limitations, arising out of the use or inability to use this documentation, regardless of the form of action, whether in contract, tort (including negligence), breach of warranty, or otherwise, even if advised of the possibility of such damages. Specifically, any warranties are disclaimed, including, but not limited to, the express or implied warranties of merchantability, fitness for a particular purpose and non-infringement, regarding the use or performance of this document. All trademarks, trade names, service marks, figures and logos referenced herein belong to their respective companies/offices.


A high-level IDaaS metric: if and when moving ID in the Cloud

Introduction

Building a metric to decide how and whether to move to IDaaS means considering which variables and which strategy have to be taken into account when organizations subscribe to identity-as-a-service contracts. Before moving any IdM to the Cloud, an organization should balance costs and risks. Accordingly, the metric adopted should be flexible enough to be applied both by a company that is developing an IdM system and by a company that already has an IAM in operation but is considering moving the ID to the Cloud. The metric introduced below is part of a forthcoming IDaaS Best Practices guide helping companies to understand, evaluate and then decide if and how to move ID to the Cloud.

IDaaS: Measure Maturity

The IDaaS metric definition starts from on-premise IdM/IAM acquisition and implementation costs. Take into consideration the following parameters:
1) COSTS – IdM/IAM costs are mainly based upon Infrastructure, Personnel, Administration (access, help desk, education/courses, ...), Attestation and Compliance (including personnel certification and upgrading), and Business Agility expenditures;
2) RISKS – Risks are based upon expenditures to cover, in order:
2.1 Implementation risks (the risk that a proposed investment in technology may diverge from the original or expected requirements);
2.2  Impact risks (the risk that the business or technology needs of the organization may not be met by the investment in the IAM solution, resulting in lower overall total benefits);
2.3 System protection (perimeter defence, audit and surveillance).

The risk/confidence the company is dealing with depends mainly upon the combination of:
– IAM maturity, in terms of implementation, maintenance and evolution capacity;
– SOA maturity, to really understand policies through the applied processes (privileges by user role, accreditations, de-accreditations, …) acting dynamically within the organization;
– Adherence to the criteria that measure service provider(s) compliance with the identity ecosystem framework.


Figure 1 – IDaaS Maturity Framework to IDaaS Best Practices

Accordingly, the metric should be based upon the organization's maturity grade. The proposed gauge is kept as simple as possible and designed to be flexible: if necessary, it can be enriched and applied to more complex systems (more parameters per maturity level, more maturity levels according to the company's policy). The metric measures the confidence/risk when an organization moves to IDaaS by adopting one of the following models:

1) ID On-premise – ID is outsourced but the infrastructure is kept inside the company. In this case ID personnel manage tools and infrastructure, but the expertise comes from the outsourcer;
2) ID Provider Hosted – A private Cloud for IDaaS is managed. The personnel managing the private Cloud (tools) are shared with the service Provider. In this case administration, tools and infrastructure are in the private Cloud and ID management is shared;


Figure 2 – IDaaS properties and possible path to the Cloud

3) ID Hybrid – IDaaS is in the Cloud although sensitive information is still managed internally. ID Hybrid means subscribing to private, community and/or public Cloud services. Tools and infrastructure are shared through the Cloud. ID administration is managed in the Cloud.
4) ID in the Cloud – The ID is in the Cloud. Only personnel managing the contract and service conditions (all aspects: policy, framework, SLA, …) are kept internally.

These aspects are important, on the one hand, for considering what risk (and countermeasures) may be taken on when moving the ID to the Cloud and, on the other hand, what takings could be expected in terms of cost savings. Companies have to balance the real business value of the risks, based upon on-premise ID maturity, against the eventual cost reduction, model by model. In the following picture, an example shows how 3 companies with 3 different levels of maturity for IdM, SOA and Ecosystem adherence meet 3 scenarios in terms of Cost/Saving and Confidence/Risk when they decide to move to IDaaS.


Figure 3 – IDaaS: 3 cases of companies having different level of maturity and risk

Company A – Company A manages advanced projects to implement and maintain high levels of maturity for IdM and SOA. Attention is also paid to the Cloud identity ecosystem: the Company applies specific criteria to assess services provisioning in the Cloud. By applying IDaaS Best Practices based on maturity levels, Company A might moderate the risks if it decides to move the ID to the Cloud. Its criteria for adopting Cloud services are stable enough to manage on-demand and full-provisioning IDaaS. Cost saving is another aspect that should be taken into consideration. By externalizing IDaaS, the expected savings might be impressive (about 70% of the CapEx invested) and, in this case, moving to the Cloud can be balanced with a path that further moderates the risk.

Company B – Company B has an intermediate maturity and work-in-progress projects for the IdM and SOA implementation. Its knowledge of the ecosystem interface is also increasing, although it is not yet disciplined. Confidence in moving the ID to the Cloud is low with respect to Company A, and the risk grows across the above IDaaS models. Considering the CapEx needed to implement internal IAM and BPM procedures, the IDaaS cost saving is lower (about 30% of the CapEx invested) than for Company A. Company B should mitigate the risk by moving to the appropriate IDaaS model. The right path to subscribing to IDaaS is to start from the most proper IDaaS model and progressively increase the levels of maturity.

Company C – Company C faces a different challenge with respect to Companies A and B. Company C is not organized to set defined levels of maturity for IdM and SOA. Moreover, there is not enough interest or experience to classify the proper requirements and accountability mechanisms typical of an identity Cloud ecosystem structure. Identity and SOA cultures exist but they are fragmented. In this case, without CapEx to cover, it seems highly attractive to save immediately by moving to IDaaS. However, cost saving alone is not, generally speaking, the best reason to move to the Cloud, nor to subscribe to IDaaS contracts. The risk of moving the ID to the Cloud is really high. Company C should ask:

–      how IDs are provisioned, authenticated and managed (IdM, IAM);
–      who retains control over ID policies and assets (SOA);
–      how stringent the peer-to-peer security standards are (ID ecosystem);
–      how and where data encryption and tokenization are employed (ID ecosystem);
–      how and where federated identity policies are employed (for example: check whether they are regularly backed by strong and protected authentication practices) (SOA);
–      what the availability, identity data protection and trust in third parties are (ID ecosystem);
–      how transparency into cloud operations is ensured with respect to multi-tenancy and data isolation (IdM and ID ecosystem).

Could Company C provide the above answers before moving the ID to the Cloud? This essential information should be an asset for any company that decides to migrate to the Cloud. The prerequisites above are only a part of the full requirements subscribers should assert before acquiring Cloud ID services. No company can improvise a move to IDaaS; consequently, the possible choices for Company C may be the following:
1) start from the low-risk ID On-premise model;
2) move the ID to the Cloud in any case, being aware of the risk, by trying to balance the IDaaS cost saving (OpEx) benefit against the transient chains of custody for sensitive enterprise data and applications that Cloud environments introduce.

Defining the Metric
The metric that best describes the above scenarios is based on products of exponential functions of parameters that express the organization's maturity levels. In practice, the general mathematical relationship is the following:

[Risk formula – original figure not reproduced]
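The formula itself is only available as an image in the original post. As a hedged reconstruction from the variable definitions that follow, one plausible form (an assumption, not necessarily the author's exact expression) computes, for each maturity range, an exponential risk from the completion percentage and the cost and time ratios, and takes the maximum over the ranges:

```latex
% Assumed reconstruction of the risk/confidence metric (not the published formula):
% each maturity range i contributes R_i, and the overall R is the maximum over the N ranges.
\[
  R \;=\; \max_{i = 1,\dots,N} R_i,
  \qquad
  R_i \;=\; \exp\!\Bigl(-\,P_{c,i}\,\prod_{j}\frac{V_{m,ij}}{V_{M,ij}}\Bigr)
        \;=\; \exp\!\Bigl(-\,P_{c,i}\,\frac{C_{m,i}}{C_{M,i}}\cdot\frac{T_{m,i}}{T_{M,i}}\Bigr)
\]
```

On the test rates listed below, this form yields a maximum of about 0.98, in line with the 98% figure discussed later, although the author's original expression may differ.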

The meaning of the variables and indexes is as follows:
R is the Risk/Confidence value over the maturity ranges for the IDaaS models described above;
Pc is the percentage of completion of each maturity range;
V is the variable corresponding to the magnitudes chosen to measure the maturity of the specified range. To calculate the level of IdM, SOA and Ecosystem maturity, two variables have been chosen: the project cost (Cm is the current cost and CM the estimated budget cost) and the project completion time (Tm is the current project time and TM the estimated project completion time);
N is the number of maturity ranges considered (IdM, SOA, Ecosystem, …).
Constraints: the exponential function is a pragmatic risk estimation based upon the concept of probability density. To compute the risk/confidence, no averaging technique is included: the maximum of the series of calculated risks has been preferred over statistical averaging models. The metric requires the following constraint: at least 3 maturity ranges should be considered to estimate the best IDaaS model, namely IdM, SOA and the Ecosystem Framework. Further, the metric is extensible and flexible enough to consider more maturity ranges and, within each one, more variables in addition to project costs and times. Finally, R (risk/confidence) is computed as the maximum value among the maturity series’ risks. In practice, consider the following test rates:

IdM Maturity: percentage of completion 30%, Cm = $25,000, CM = $75,000, Tm = 6 months, TM = 24 months
SOA Maturity: percentage of completion 40%, Cm = $55,000, CM = $90,000, Tm = 8 months, TM = 24 months
Ecosystem Framework Maturity: percentage of completion 15%, Cm = $10,000, CM = $30,000, Tm = 2 months, TM = 6 months
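Under the reconstructed form sketched above (an assumption, not the published formula), the test rates can be evaluated in a few lines; the maximum comes out at roughly 0.98, matching the 98% risk discussed below.

```python
import math

# Test rates per maturity range: (completion Pc, current cost Cm, budget CM,
# elapsed project time Tm in months, estimated completion time TM in months)
ranges = {
    "IdM":       (0.30, 25_000, 75_000, 6, 24),
    "SOA":       (0.40, 55_000, 90_000, 8, 24),
    "Ecosystem": (0.15, 10_000, 30_000, 2, 6),
}

def risk(pc, cm, cM, tm, tM):
    """Assumed per-range risk: exp(-Pc * (Cm/CM) * (Tm/TM))."""
    return math.exp(-pc * (cm / cM) * (tm / tM))

risks = {name: risk(*vals) for name, vals in ranges.items()}
overall = max(risks.values())  # R is the maximum over the maturity ranges
print({k: round(v, 3) for k, v in risks.items()}, round(overall, 2))
# {'IdM': 0.975, 'SOA': 0.922, 'Ecosystem': 0.983} 0.98
```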

The risk/confidence outcomes based upon the above values are the following, and the maximum value is:

[Computed risk/confidence values – original figure not reproduced; the maximum is about 98%]

Could the company accept a 98% risk in moving its ID system to the Cloud? What is the main pain point, looking at the maturity ranges and the risk rates? Which IDaaS model could moderate the risk and reduce the costs? The solution in the figure below might be a measured way to gain confidence and awareness before subscribing to an IDaaS contract.


Figure 4 – Snapshot based upon the above maturity rates and risk/confidence values

Conclusion

Companies could apply a systematic approach by adopting the gauge exploited above. The metric can help in deciding whether balancing risks and OpEx advantages is appropriate when subscribing to an IDaaS contract, towards security and business benefits. Looking at the cost saving for Company C, the cutbacks could be modest (about 20% or less with respect to the actual CapEx), although the ROI would be faster. It depends upon the IDaaS strategy the Company decides to implement.

References

[1] N. Piscopo – Applying MaaS to DaaS (Database as a Service) Contracts. An introduction to the Practice http://cloudbestpractices.net/profiles/blogs/applying-maas-to-daas-database-as-a-service-contracts-an
[2] N. Piscopo – Best Practices for Moving to the Cloud using Data Models in the DaaS Life Cycle
[3] N. McEvoy – IDaaS Identity-as-a-Service best practices http://CanadaCloud.biz
[4] E. Baize et al. – Identity & Data Protection in the Cloud
[5] F. Villavicencio – Advantages of a Hybrid Co-Sourced IDaaS Model
[6] Identity in the Cloud Outsourcing Profile Version 1.0 – OASIS Committee Note Draft 01 / Public Review Draft 01
[7] N. Piscopo, N. McEvoy – IDaaS – Introduction to the Identity in the Cloud
[8] WG-CloudIDSec IDaaS (Identity as a Service) www.cloud-identiy.info

Disclaimer – This document is provided AS-IS for your informational purposes only. In no event will the contents of “A high-level IDaaS metric: if and when moving ID in the Cloud” be liable to any party for direct, indirect, special, incidental, economic (including lost business profits, business interruption, loss or damage of data, and the like) or consequential damages, without limitations, arising out of the use or inability to use this documentation, regardless of the form of action, whether in contract, tort (including negligence), breach of warranty, or otherwise, even if advised of the possibility of such damages. Specifically, any warranties are disclaimed, including, but not limited to, the express or implied warranties of merchantability, fitness for a particular purpose and non-infringement, regarding the use or performance of this document. All trademarks, trade names, service marks and logos referenced herein belong to their respective companies/offices.

The Evolution of Single Sign-on

Replacing mainframes with 21st century identity

By Paul Madsen, senior technical architect

The concept of single sign-on (SSO) is not a new one, and over the years it has successfully bridged the gap between security and productivity for organizations all over the globe.

Allowing users to authenticate once to gain access to enterprise applications improves access security and user productivity by reducing the need for passwords.

In the days of mainframes, SSO was used to help maintain productivity and security from inside the protection of firewalls. As organizations moved to custom-built authentication systems in the 1990s, it became recognized as enterprise SSO (ESSO) and later evolved into browser-based plugin or web-proxy methods known as web access management (WAM). IT’s focus was on integrating applications exclusively within the network perimeter.

However, as enterprises shifted toward cloud-based services at the turn of the century and software-as-a-service (SaaS) applications became more prevalent, the domain-based SSO mechanisms began breaking. This shift created a new need for a secure connection to multiple applications outside of the enterprise perimeter and transformed the perception of SSO.

Large-scale Internet providers like Facebook and Google also created a need for consumer-facing SSO, which did not previously exist.

Prior to these social networks, SSO was used only within the enterprise; new technology had to be created to meet the demands of businesses as well as to securely authenticate billions of Internet users.

There are many SSO options available today that fit all types of use cases for the enterprise, business and consumer, and they have been divided into three tiers, with Tier 1 SSO being the strongest and most advanced of the trio. Tier 1 SSO offers maximum security when moving to the cloud, the highest convenience to all parties, the highest reliability as browsers and web applications go through revisions, and generally has the lowest total cost of ownership. Tier 2 SSO is the mid-level offering meant for enterprises with a cloud-second strategy. Tier 3 SSO offers the least amount of security and is generally used by small businesses moving to the cloud outside of high-security environments.

The defining aspect of Tier 1 SSO is that authentication is driven by standards-based token exchange while the user directories remain in place within the centrally administered domain as opposed to synchronized externally. Standards such as SAML (Security Assertion Markup Language), OpenID Connect and OAuth have allowed for this new class of SSO to emerge for the cloud generation. Standards are important because they provide a framework that promotes consistent authentication of identity by government agencies to ensure security.
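As a small, hypothetical illustration of standards-based token exchange (the issuer URL, audience and key are invented, and this shows only one slice of an OpenID Connect flow), a service provider verifying a signed ID token with the PyJWT library might look like this, with the user directory staying at the identity provider:

```python
import jwt  # PyJWT: pip install "pyjwt[crypto]"

def verify_id_token(id_token: str, idp_public_key: str) -> dict:
    """Validate an OpenID Connect ID token's signature, issuer, audience and expiry."""
    return jwt.decode(
        id_token,
        idp_public_key,                    # the IdP's published signing key (e.g. from its JWKS)
        algorithms=["RS256"],
        audience="example-client-id",      # hypothetical relying-party client ID
        issuer="https://idp.example.com",  # hypothetical identity provider
    )

# claims = verify_id_token(token_from_idp, idp_public_key)
# claims["sub"] then identifies the authenticated user across applications.
```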

These standards have become such a staple in the authentication industry that government agencies like the United States Federal CIO Council, NIST (National Institute of Standards and Technology) and Industry Canada have created programs to ensure these standards are viable, robust, reliable, sustainable and interoperable as documented.

The Federal CIO Council has created the Identity, Credential, and Access Management (ICAM) committee to define a process where the government profiles identity management standards to incorporate the government’s security and privacy requirements, to ensure secure and reliable processes.

The committee created the Federal Identity, Credential, and Access Management (FICAM) roadmap to provide agencies with architecture and implementation guidance that addresses security problems, concerns and best practices. Industry Canada’s Authentication Principles Working Group created the Principles for Electronic Authentication which was designed to function as benchmarks for the development, provision and use of authentication services in Canada.

As enterprises continue to adopt cloud-based technologies outside of their network perimeter, the need for reliable SSO solutions becomes more vital. Vendors that support these government-issued guidelines offer the strongest and most secure access management available today. Since the establishment of SSO, technological capabilities have greatly advanced and SSO has been forced to evolve over the past few decades. First-generation SSO solutions were not faced with Internet scale or exterior network access, whereas today’s SSO is up against many more obstacles.

As IT technology progresses in the future, SSO will have to grow with it and strengthen its security. For instance, while SSO is the expectation for web browser applications, the emergence of native applications (downloaded and installed onto mobile devices) has highlighted the necessity of a similar SSO experience for this class of applications. To address these new use cases, new standards (or profiles of existing standards) are emerging, and initiatives like the Principles for Electronic Authentication will have to adapt accordingly in order to offer the best guidance possible.

Drastic Measures Not Needed with DRaaS

“We are seeing 100-year hurricane cycles arrive every two years.”

Mike Gault

Perhaps the only thing worse than a disaster happening is seeing it coming and knowing nothing can be done to stop it. Businesses along the northeastern seaboard had several days of warning before Hurricane Sandy struck, certainly not enough time to implement a disaster recovery plan from scratch. Even more painful is the understanding that some disaster recovery plans would not be enough; physical backup systems in separate geographical areas may have still suffered the same losses as the home site due to the size of the storm.

Most disasters come with no warning at all. Explosions, power outages, and simple equipment failure can cause the same damage. Operations are down, customers suffer, and revenues tank. Once business recovers the harder work of wooing back customers and convincing new ones about the company’s reliability begins.

Simply doubling up infrastructure and creating physical backups is expensive and time-consuming, leading to systems that function inadequately when put into use. Cost cutting means doing without applications and information essential to performance. Lack of testing and differences in tools lead to inefficient work practices during recovery.

Move into the Cloud

Cloud computing and virtual services eliminate a majority of these concerns. Disaster Recovery as a Service, or DRaaS, is a resource-efficient method of allowing business to continue with little to no interruption. Because everything resides in the cloud, no duplicate infrastructure is needed, testing and upgrades are assured, and no applications or information need be out of commission.

DRaaS is a natural extension of the cloud computing phenomenon. Service providers have hardened their security and created tiered services that fit any budget. Companies are embracing cloud computing for a variety of purposes. The flexibility of such services is a huge driver to adoption since only the services needed are active. The rest can be brought online as desired or shut down during idle time.

IT overhead and infrastructure reductions create cash to fuel growth. Cloud services are the perfect vehicle for the rapidly expanding mobile worker and consumer groups. By taking the time upfront to plan and consider operational requirements, disaster recovery can be the key to successful business recovery.

Service Level Agreement Considerations

The Service Level Agreement (SLA) spells out exactly what will and will not be provided with any cloud service. It is crucial to understand the SLA governing disaster recovery, because a disaster is not the time to discover shortcomings in coverage. Performance and productivity need not suffer if due diligence is taken to make a realistic determination of business continuity needs. Planning wisely also keeps SLA costs to a minimum.

Consider these questions:

  • What applications must be included?
  • What operations are essential for service?
  • What information must be easy to access during this time?
  • How often are testing and upgrades performed?
  • What guarantee of data integrity is offered?

A good service provider will have the experience to help answer these and other questions. They should have an excellent understanding of the extent of disaster recovery needed in a variety of industries. Some providers may even specialize in certain verticals, deepening their ability to determine needs and provide suggestions.

DRaaS Benefits Tower Over Risk

If nothing else, Hurricane Sandy brought home the absolute worst that could happen. Fire, flood, and power failures on such a massive scale are unprecedented but not impossible. Disaster recovery is an essential part of business continuity that must not be put off.  The cost of loss far outweighs the cost of DRaaS because, even if such events are rare, all it takes is once. New York Governor Andrew Cuomo said we are seeing 100-year hurricane cycles arrive every two years.

With the knowledge that DRaaS, like all cloud services, is a cost-effective way to relieve the worry of business interruptions, large or small, business owners can put a line through this item on the to-do list. With guarantees of integrity and continuity, resources and energy can be channeled into growing the business and keeping customers happy.

# # #

Mike Gault is CEO of Guardtime, a developer of digital signatures that algorithmically prove the time, origin and integrity of electronic data.  He started his career conducting research in Japan on the computer simulation of quantum effect transistors. He then spent 10 years doing quantitative financial modeling and trading financial derivatives at Credit Suisse and Barclays Capital. Mike received a Ph.D. in Electronic Engineering from the University of Wales and an MBA from the Kellogg-HKUST Executive MBA Program in Hong Kong. You can reach him at Mike.Gault@guardtime.com or visit www.guardtime.com.

Data is the new perimeter for cloud security

By Mike Gault, Ph.D.

The cyber security market in 2012 is estimated at $60 billion, yet adding more and more layers of perimeter security may lead to a false sense of security and be completely useless against a determined system administrator working on the inside. The end result is that your data might be secure or it might not – you simply have no way to prove it.

Shawn Henry, FBI veteran of 24 years and now president of CrowdStrike Services had this to say about integrity at the Black Hat conference this year: “These days, you can’t just protect the information from being viewed. You also need to protect it from being changed or modified.”

This leads to the question: Would you know if an attacker or your own system administrator got to your data?

Traditionally, the ‘integrity’ component of the CIA triad of data security [confidentiality, integrity, availability] has focused on protecting the integrity of data. But proving the integrity of data – knowing you have not been compromised – is equally if not more important.

We have been nibbling around the edges of this with checksums and other one-way hash algorithms but have yet to create truly scalable, rock-solid mechanisms to prove integrity.

It’s as though we have taken a car that holds our most precious cargo (our children) and wrapped it with increasing layers of protection but we fail to create a way to monitor the brakes or onboard computers for tampering or other untoward acts.

Data is the new perimeter

Many experts have come to the conclusion that all networks will eventually be compromised, so security should be focused on protecting data and less about the perimeter – i.e., what is required is a data-centric focus on security.

What is needed is an infrastructure that’s designed to deliver digital signatures for data at scale, ensuring that verification of the signatures does not require trusting any single party.

Donald Rumsfeld famously distinguished between known unknowns and unknown unknowns. Digital signatures that are essentially ‘keyless’ have the power to convert one unknown — “Is my security working?” — into a known: “I have proof that my applications and data have not been compromised, and that proof is independent from the people operating those systems.”

So what is a keyless signature? In a nutshell, a keyless signature is a software-generated tag for electronic data that provides proof of signing time, entity, and data integrity. Once the electronic data is tagged, it means that wherever that data goes, anyone can validate when and where that data was tagged and that not a single bit has changed since that point in time. The tag, or signature, never expires and verification relies only on mathematics – no keys, secrets, certificates, or trusted third parties – just math.
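The following is only a toy illustration of the general idea of a data-integrity tag (it is not Guardtime's keyless signature scheme, which additionally anchors the hash in a distributed, time-ordered structure): a hash recorded at signing time lets anyone later check, by pure recomputation, that not a single bit has changed.

```python
import hashlib
import json
import time

def tag(data: bytes) -> dict:
    """Create a toy integrity tag: a SHA-256 digest plus the signing time."""
    return {"sha256": hashlib.sha256(data).hexdigest(), "signed_at": time.time()}

def verify(data: bytes, signature: dict) -> bool:
    """Re-hash the data and compare: verification is just math, no keys or secrets."""
    return hashlib.sha256(data).hexdigest() == signature["sha256"]

document = json.dumps({"invoice": 42, "amount": 100.0}).encode()
sig = tag(document)
print(verify(document, sig))                # True: not a single bit has changed
print(verify(document + b"tampered", sig))  # False: any modification is detectable
```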

And we can all trust math.

About the Author
Mike Gault is CEO of Guardtime, a developer of digital signatures that algorithmically prove the time, origin and integrity of electronic data. He started his career conducting research in Japan on the computer simulation of quantum effect transistors. He then spent 10 years doing quantitative financial modeling and trading financial derivatives at Credit Suisse and Barclays Capital. Mike received a Ph.D. in Electronic Engineering from the University of Wales and an MBA from the Kellogg-HKUST Executive MBA Program in Hong Kong. You can reach him at Mike.Gault@guardtime.com or visit http://www.guardtime.com.