Agile Big Data and Many-Particle approach change Marketing and Sales effectiveness

Big Data projects have a broad impact on organizations. A Big Data implementation, combined with Many-Particle data aggregation, can be considered a new way to align data management with the business. With Big Data, the path from data sources to data intelligence changes drastically: the way data intelligence is designed and implemented definitively changes how data is accessed, ingested, distilled, processed and visualized. Big Data projects meet agile implementation and shorten the data intelligence lifecycle by increasing service capability and adequacy for fast-growing datasets and a fast-moving business. Accordingly, agile practice and the many-particle approach minimize data entropy and data access time cycles everywhere, preserve data security and enhance the user experience for instant business realignment.

Contents
Introduction
Data Topology and Agile Big Data
The Many-Particle approach
Conclusion
Acknowledgment
References

Introduction
The way to move from today's business data to Big Data intelligence can be a costly and time-consuming process that erodes the tremendous advantage of the Big Data and Cloud paradigms. Today, information is still misaligned with the business despite the huge efforts of past business intelligence projects: companies still use only a fraction of their real corporate data heritage. As a consequence, the data spectrum exploited is unpredictable, and aligning data with the business is a long-term process. Agile Big Data instantly aligns the data heritage with business data. Continuous data ingestion and distillation drastically reduce the ETL process: intelligence is run on the "big data-lake" when needed. On-premise big data topology and functional data intelligence then play a crucial role in meeting profitability, customer affinity and fast-moving business goals. This paper introduces the business case for Big Data to avoid Marketing and Sales data entropy, reduce risks and increase the likelihood of an aware and successful Big Data implementation.

Data Topology and Agile Big Data
Documenting how data evolves and is updated has long been considered good practice in data management. At the beginning of the cloud paradigm, driven by the attraction of cost cutting, keeping a map of the company data heritage became a great benefit, especially when services have to be subscribed in the cloud. Data models, a way to document the data heritage, evolved into MaaS (Model as a Service), which supports agile design and delivery of data services in the Cloud and makes the difference when planning a Big Data implementation project.

Considering data models does not mean structured data only. On-premise models map data coming from structured, semi-structured and unstructured sources. Data models map the topology of services that will be run on-premise or in the cloud. Data models are still needed for early exploration analysis and for the "ab-initio" classification parameters that define service boundaries (toward a personal cloud, financial parameters or healthcare positions, for example); data models (on SQL, NoSQL, vector or graph structures) essentially do not address the meaning of the data but identify the service classes before the data-lake is created. Of course, unusable data and unstructured or denormalized raw datasources converge into the data-lake as well. The more aware the on-premise topology is, the more secure and localizable the big data usage is, both on-premise and in the Cloud. Further, the agile MaaS approach reveals the business processes affected, the operating requirements and the stakeholders.

Fig. 1 – Corporate Data-Lake and Agile Big Data approach

Accordingly, agile Big Data practice sets the link between on-premise data topologies and data intelligence run on-premise or in the cloud. Topology leverages the company's service assets toward specific business objectives and determines the successful user experience requirements and the proper rapid realignment with respect to competitors.

This means that two crucial aspects have to be taken care of:

  • Data is the "compass" to understand service capacity, stakeholders and the culture of the organization: big data agility is based on a data-driven approach. Therefore, minimize functional data behaviour in the incoming project setup. Use the MaaS topology to define data-driven project use cases. Data-driven project design defines the data ingestion architecture and data landing into the data-lake, and assists in understanding the best policy for continuous data feeding. Do not disregard this aspect: accurate data feeding is the core of Big Data approaches;
  • Move data analysis and functional aggregation to the data intelligence applied on the data-lake. During ingestion and data landing, data treatments have to be minimized. The agile Big Data approach considers two zones: the in-memory zone, based on data topology and supported on-premise by MaaS, and the data intelligence zone, based on functional analysis and programming working on sparse data (a minimal sketch of this separation follows this list).
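To make the two-zone idea concrete, here is a minimal Python sketch (paths, field names and the predicate are hypothetical): ingestion only lands raw records with lineage metadata, while any functional treatment is deferred to a later pass over the lake.

```python
import json
import time
from pathlib import Path

LAKE = Path("data-lake/raw")          # hypothetical landing area
LAKE.mkdir(parents=True, exist_ok=True)

def ingest(record: dict, source: str) -> Path:
    """Landing zone: persist the record as-is, adding only lineage metadata."""
    envelope = {"source": source, "ingested_at": time.time(), "payload": record}
    target = LAKE / f"{source}-{int(time.time() * 1e6)}.json"
    target.write_text(json.dumps(envelope))
    return target

def distil(predicate):
    """Intelligence zone: functional pass over the raw lake, applied only when needed."""
    for path in LAKE.glob("*.json"):
        envelope = json.loads(path.read_text())
        if predicate(envelope["payload"]):
            yield envelope

# Usage: land two records untouched, then run a late, functional selection.
ingest({"customer": "A", "amount": 120.0}, source="crm")
ingest({"customer": "B", "amount": 80.0}, source="web")
high_value = list(distil(lambda p: p.get("amount", 0) > 100))
print(len(high_value), "high-value records")
```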

Still, minimize any approach based on "ab-initio" technology and software development. The Big Data ecosystem provides excellent platforms, and the agile MaaS approach helps defer the final technology choice. Further, agile MaaS practice helps to clarify the zones of success and failure and to set expectations over time. This happens because, once services have been set by the on-premise topology, a link has been established between the data heritage and the data intelligence. There are no constraints between the raw data (documented or not) and the user experience that will leverage functional and business alignment. In the middle only the data-lake exists, continuously changing and growing, continuously supplying information to the data intelligence end.

The Many-Particle approach
Today, more than 70 percent of the world's information is unstructured, not classified and, above all, misused: we are witnessing the greatest Marketing and Sales data myopia since these disciplines exist. Still, there is little awareness of the Big Data benefits for service and/or product companies, or of how product companies can change the services built around goods production: great amounts of data, exceptionally fast growth, high entropy, unknown correlations and limited data usage. The concept of on-premise topology introduces services as data-driven aggregation states applied to given parts of the data-lake. But this is also what happens in many-particle system instability (a yottabyte is 10^24 bytes, roughly 2^80 in binary terms). Big data storage dimensions bring the data-lake close to a many-particle system. This vision overturns any traditional approach to Marketing and Sales.

If we consider the big data-lake, it contains fast-moving content ordered by data affinity and mass correlation. Depending upon dynamic data aggregation, data topologies may change by tuning the on-premise data mapping. Consider that data-lakes are mainly fed through:

– ingestion, distillation and landing from content-based sources (datasources, datasets, operational and transactional DBs);
– ingestion and distillation from collaborative feeding (dynamic collections of large amounts of information on users' behaviour coming from the internet, directly and/or indirectly).

Collaborative ingestion can also be managed as content-based when the time needed to reach the data intelligence end has no strict constraints; this defines a third method, the hybrid one.
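A tiny sketch of how the three feeding methods could be told apart at ingestion time; the routing rule and the names are illustrative assumptions, not a prescribed design.

```python
from enum import Enum, auto

class FeedType(Enum):
    CONTENT_BASED = auto()    # datasources, datasets, operational/transactional DBs
    COLLABORATIVE = auto()    # behavioural collections gathered from the internet
    HYBRID = auto()           # collaborative feed handled with content-based timing

def classify_feed(origin: str, latency_constrained: bool) -> FeedType:
    """Illustrative routing rule: a collaborative feed without strict latency
    constraints can be treated as content-based, i.e. the hybrid method."""
    if origin == "behavioural":
        return FeedType.COLLABORATIVE if latency_constrained else FeedType.HYBRID
    return FeedType.CONTENT_BASED

print(classify_feed("transactional_db", latency_constrained=False))  # CONTENT_BASED
print(classify_feed("behavioural", latency_constrained=False))       # HYBRID
```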

This brief introduction tries to explain that the data-lake maps ab-initio topologies to services, but it may also classify the ecosystems the services are defined in and applied to. Services live in ecosystems, and ecosystems depend upon data aggregation (why, where and how data is used, and by whom); just like aggregation states, big data density changes dynamically. These changes are a consequence of the datasources ingested, user experiences, customer behaviours, ecosystem interactions and, of course, business realignment. Marketing and Sales should change accordingly. But since the data-lake may grow by 40 percent per year (in line with the estimated worldwide rate of information growth, taking into account that unstructured data is growing 15 times faster than structured data – source IBM®), there is no way for the Marketing and Sales organization to get any (predictive) control, even if data warehousing and/or sophisticated traditional data mining and analysis are in place.

Anyway, data growth will be greater than ever in the coming years, so the variance of data aggregation in the data-lake will rise exponentially: many opportunities could be lost, adding further Marketing and Sales entropy. Ab-initio topology through the agile big data approach, with functional programming applied to the data-lake, supplies the best answer for prescriptive analysis on many-particle big data systems. In fact, the data-lake makes it possible to work on data cross-aggregation optimization, customer experience and aggregation states for service realignment with respect to the business ecosystems. Still, the data-lake is an extraordinary real-time "what-if set" for prescriptive scenarios, data processing assumptions and data risk propensity.
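As an illustration of functional programming applied to the data-lake, the sketch below folds hypothetical lake records into an aggregation state and re-runs the same fold under a "what-if" adjustment; the records, keys and the +10% assumption are invented for the example.

```python
from functools import reduce
from collections import defaultdict

# Hypothetical slice of the data-lake: (ecosystem, service, measure) tuples.
lake = [
    ("banking", "payments", 120), ("banking", "payments", 80),
    ("banking", "lending", 40), ("retail", "recommendations", 200),
]

def aggregate_by(key_fn, records):
    """Pure, functional aggregation: fold records into an aggregation state."""
    def step(state, rec):
        state[key_fn(rec)] += rec[2]
        return state
    return reduce(step, records, defaultdict(int))

def what_if(records, adjust):
    """Prescriptive 'what-if': re-run the same aggregation on adjusted records."""
    return aggregate_by(lambda r: r[0], [adjust(r) for r in records])

baseline = aggregate_by(lambda r: r[0], lake)
scenario = what_if(lake, lambda r: (r[0], r[1], r[2] * 1.1))  # assumed +10% volume
print(dict(baseline), dict(scenario))
```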

Fig.2 – The Data-Lake is quickly becoming a Data-Sea with multi-particle-like data behaviour and dimension

Banking and Goods Production are two typical examples of agile Big Data implementation. Both supply services. Both try to instantly and proactively align their offering with business changes. Banking and financial services play a strategic role in relationship management and profitability performance for corporate groups, client companies and commercial banking networks. This is why financial applications need to be rapidly synchronized with ecosystem fluctuation states, as ecosystem participants change their behaviour everywhere due to local and international business conditions. The functional big data paradigm working on many-particle data aggregation is prescriptive with respect to unpredictable service transitions: it agilely realigns ecosystem service directions over the on-premise data topology mapping.

Goods production may tune services as a consequence of the user's experience, for example by executing more focused and less time-consuming recommender systems. Goods production companies are racing to provide personalized technical and commercial services, greater client loyalty and prescriptive offers from the moment clients interact with or navigate the company website. With agile big data and the many-particle approach, goods production can increase user similarity through massive data-lake aggregations. Fast-moving data aggregations constantly feed the functional data intelligence for service realignment and topological correlation repositioning of on-premise data similarities.
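One possible reading of "increasing user similarity by data-lake massive data aggregations" is sketched below with a plain cosine similarity over per-user aggregation vectors; the users and categories are hypothetical.

```python
from math import sqrt

# Hypothetical per-user aggregation states taken from the data-lake
# (e.g. counts of interactions per product category).
users = {
    "u1": {"cycling": 5, "running": 2, "swimming": 0},
    "u2": {"cycling": 4, "running": 1, "swimming": 1},
    "u3": {"cycling": 0, "running": 0, "swimming": 7},
}

def cosine(a: dict, b: dict) -> float:
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The closest neighbour of u1 drives the next recommendation.
neighbours = sorted((cosine(users["u1"], v), k) for k, v in users.items() if k != "u1")
print(neighbours[-1])   # expected neighbour: u2
```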

Two different paces, the same objective: be prescriptive, understand early which data aggregation state is the most proper along the data-lake's instability, and then continuously realign product offers and service configurations so as to keep ecosystems overseen. An on-premise topology gauged on data-lake volume, velocity and variety allows Marketing and Sales to tune in to the effective data aggregation and promptly adjust services to the ecosystem.

Conclusion
Client sentiment and user experience behaviour analytics allow rapid changes to product offerings or customer support, which in turn enhance customer fidelity and business improvement. However, data is growing exponentially and business alignment has to be provided in ever more decentralized environments. The agile MaaS approach, based on data-driven raw volume, data velocity and variety together with an on-premise services topology, is a relatively low-cost and light model. Topology does not influence data treatment: data remains intact, while service integrity and classification drive business, user experience and ecosystem alignment. Accordingly, the agile practice and many-particle approach we introduced minimize data entropy and data access time cycles everywhere, preserve data security and enhance the user experience toward functional visualization realignment.

Acknowledgment
I sincerely thank Paolo La Torre for his precious feedback on the contents and his encouragement to publish this paper. Paolo works as Commercial, Technical and Compliance Project Supervisor for Big Data planning and engagement directions in finance and banking.

References
N. Piscopo, M. Cesino – Gain a strategic control point to your competitive advantage – https://www.youtube.com/watch?v=wSPKQJjIUwI
N. Piscopo – ID Consent: applying the IDaaS Maturity Framework to design and deploy interactive BYOID (Bring-Your-Own-ID) with Use Case
N. Piscopo – A high-level IDaaS metric: if and when moving ID in the Cloud
N. Piscopo – IDaaS – Verifying the ID ecosystem operational posture
N. Piscopo – MaaS (Model as a Service) is the emerging solution to design, map, integrate and publish Open Data
N. Piscopo – Best Practices for Moving to the Cloud using Data Models in the DaaS Life Cycle
N. Piscopo – Applying MaaS to DaaS (Database as a Service ) Contracts. An introduction to the Practice
N. Piscopo – MaaS applied to Healthcare – Use Case Practice
N. Piscopo – ERwin® in the Cloud: How Data Modeling Supports Database as a Service (DaaS) Implementations
N. Piscopo – CA ERwin® Data Modeler’s Role in the Relational Cloud
N. Piscopo – Using CA ERwin® Data Modeler and Microsoft SQL Azure to Move Data to the Cloud within the DaaS Life Cycle

Disclaimer – This document is provided AS-IS for your informational purposes only. In no event will the contents of "Agile Big Data and Many-Particle approach change Marketing and Sales effectiveness" be liable to any party for direct, indirect, special, incidental, economic (including lost business profits, business interruption, loss or damage of data, and the like) or consequential damages, without limitation, arising out of the use or inability to use this documentation, regardless of the form of action, whether in contract, tort (including negligence), breach of warranty, or otherwise, even if advised of the possibility of such damages. Specifically, any warranties are disclaimed, including, but not limited to, the express or implied warranties of merchantability, fitness for a particular purpose and non-infringement, regarding the use or performance of this document. All trademarks, trade names, service marks, figures and logos referenced herein belong to their respective companies/offices.

ID Consent: applying the IDaaS Maturity Framework to design and deploy interactive BYOID (Bring-Your-Own-ID) with Use Case

Introduction

Current approaches to IDaaS on the one hand enforce trust of consumer data using legal compliance, risk and impact assessment, and on the other hand require technical implementation of access controls to personal data held by an enterprise. Balancing trust has to be done across all layers: verifying people's identities, showing that the individual and the service are real, creating short-term relationships, and verifying and maintaining, all along the Cloud service, the user mapping between the enterprise and the cloud user account in a mesh federation. This makes sense only if enterprises design "on-premise" with MaaS their own flexible ID data model and can verify ID maturity and consistency before moving, and while running, the ID service in the Cloud. Based on MaaS, the BYOID concept is a possible solution to ID models for consent policy design, management and deployment. The BYOID model is a means of expressing, tracing and updating consumers' personal data policy requirements; enterprise users' privacy preferences are provided as well. The IDaaS Maturity Framework (IMF) defines and directs the BYOID practice. MaaS guides properties and personal preferences from the consent metamodel design to the ID deployment. Both ensure that ecosystem compliance is achieved and that ID in the Cloud meets trustworthy relationships.

IMF supports flexible BYOID design and deployment

IDaaS is an authentication and authorization infrastructure that is built, hosted and managed through different models by third-party service providers resident in ID ecosystem frameworks. IDaaS for the enterprise is typically purchased as a subscription-based managed service. One or more cloud service providers, depending upon the IDaaS model the enterprise deploys, may host applications and provide subscribers with role-based web access to specific applications or even to an entire virtualized infrastructure. IDaaS makes enterprises responsible for evaluating privacy risks and the degree of confidence when moving the ID to the cloud. Accordingly, before externalizing the corporate IdM, consider that different IDaaS models are supported depending upon the maturity levels of:

– the IdM/IAM system, in terms of implementation, maintenance and IdM/IAM governance capacity. ID, by its nature, is de-centralized, so the maturity rank should consider the whole IdM/IAM system, including data protection, data manageability, data security and organization-wide ID awareness;

Fig. 1 – An example of enterprise BYOID consent model lifecycle to IDaaS deployment and reconciliation

– the SOA system, to really understand policies through the de-coupling of applied processes (privileges by user role, accreditations, de-accreditations …) and the procedures dynamically acting in the organization;

– ID ecosystem reliability and adherence to the frameworks' security criteria that measure the service provider(s)' compliance.

The levels of maturity gauged across the organization enable the enterprise to design its own ID as a consequence of the appropriate IDaaS model. The enterprise is able to bring into the ID ecosystem a configurable IDaaS model, based on MaaS design, that satisfies the enterprise business rules. Business rules have an impact on enterprise identity requirements, and they balance and reconcile consumer identity needs. This "fluid", multiple-way enterprise-consumer solution, or BYOID, creates a high assurance level of ID ecosystem participants' identities that can be used for enterprise access while respecting privacy and security requirements: IDaaS models contain the BYOID properties and define "on-premise" BYOID maturity and consistency.

A new concept of ID consent: the BYOID fluid model

When registering with an Identity Platform, users would like to represent themselves according to their behaviour, having the option to approve selective or discretionary sharing of their private information and looking for the ability to obfuscate, mask or mesh some parts of their personal data. In this way, the ID platform and the user interactively create a bond of trust as part of the whole ID service. This is possible only if the consent of the individuals, the data protection conditions for processing their personal data and the consent policies can be modelled "on-premise" by the enterprise IdM.

Looking at the IMF, the ID metamodel might sprout in the IdM/IAM maturity appraisal stage, according to the properties and requirements the enterprise needs to protect personal data and sensitive information. The question now is the following: if the ID metamodel is designed in the company IdM, can the consent model be considered proprietary? The metamodel gathers the properties corresponding to the real enterprise requirements, but it will be tested and appraised first in the IdM/IAM system and then in the SOA maturity system. At that point, features like interoperability, expression of functionality and user behaviour will be explicit aspects of the BYOID data model, such as the following:

1)    Trust properties;
2)    Verification;
3)    Scalability and performance;
4)    Security;
5)    Privacy;
6)    Credential Types;
7)    Usability;
8)    Attributes;
9)    User Centricity/User Control.

The above properties are the matter for the ID ecosystem public consent data model structure (the basic/incoming tables of the BYOID metamodel). In the beginning, those metadata are properties of the company: the company's BYOID metamodel. Once the BYOID metamodel has been defined, tested and approved as the BYOID company data model, it is released to the ID ecosystem as an IDaaS model subscription. Despite different approaches, each enterprise may then adopt and release its own BYOID. Before deploying BYOID services in the Cloud, the BYOID model should be compared with the other BYOID models already running in the ID ecosystem frameworks. To be accepted, BYOIDs have to meet a set of common requirements enforced by the consent public ID ecosystem framework authority: the more adaptive the public consent model is (continuously and rigorously improved), the more flexible, secure and reliable the shared BYOIDs are. This makes the BYOIDs deployed through IDaaS interactive, fluid and safe. Still, it enables user behaviour to be captured both at a high level (enterprise-ecosystem reconciliation) and at a low level (personal-enterprise-ecosystem reconciliation). Therefore, BYOID can be reconciled, renormalized and constantly trusted at all levels. Since the BYOID metamodel contains the enterprise identity requirements, it might include and integrate the ID ecosystem identity properties and, if approved by the user (with the obligation to maintain the personal data securely), the user's personal properties. This aspect is very important: in fact, there is significant risk for a company when both customer/user relationships and company data are stored on personal devices. Using BYOID deployed as an IDaaS subscription, company information is centralized based upon "on-premise" consent metamodels: this means that company information stored on personal devices is minimized and always centrally controlled.
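The sketch below shows one possible way to encode the nine property groups above as an "on-premise" consent metamodel and to check it against an ecosystem baseline before subscription; the field names, value types and baseline keys are assumptions, not part of the IMF specification.

```python
from dataclasses import dataclass, field

@dataclass
class BYOIDConsentModel:
    """Illustrative "on-premise" consent metamodel carrying the nine property
    groups listed above; names and value types are assumptions."""
    trust: dict = field(default_factory=dict)
    verification: dict = field(default_factory=dict)
    scalability_performance: dict = field(default_factory=dict)
    security: dict = field(default_factory=dict)
    privacy: dict = field(default_factory=dict)
    credential_types: list = field(default_factory=list)
    usability: dict = field(default_factory=dict)
    attributes: dict = field(default_factory=dict)
    user_control: dict = field(default_factory=dict)

def meets_baseline(model: BYOIDConsentModel, baseline: dict) -> bool:
    """Before the IDaaS subscription, compare the company model against the
    ecosystem's public consent requirements (hypothetical keys)."""
    return all(getattr(model, group) and required <= set(getattr(model, group))
               for group, required in baseline.items())

company = BYOIDConsentModel(privacy={"masking": True, "retention_days": 90},
                            security={"encryption": "AES-256"})
print(meets_baseline(company, {"privacy": {"masking"}, "security": {"encryption"}}))
```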


Fig. 2 – Fluid BYOID update and reconciliation: IDaaS User Experience vs. BYOID IDaaS subscription

Users' personal properties might reside in the same company (central) metamodel/consent model or not, depending upon the user's approval and, always possible, withdrawal (i.e. personal data should comply with data protection legislation and, where necessary, the approval of the individual must be obtained). Figure 2 shows an example. In step 1, the user tries a new behaviour (statistically relevant or suggested by a recommender system); in step 2, the IDaaS user experience has to be changed and updated. The figure shows three data models, but in the MaaS representation they consist of a single model containing the BYOID IDaaS subscription (master), which includes two sub-models: the company consent model and the user personal model. In step 3, the consent model is modified to keep compliance with the company business rules/conduct mapped to the BYOID IDaaS subscription. In step 4, the update is finally executed and the user might find his conduct available as a new function. Note, however, that in Figure 2 a relational model-like formalism is applied. This is just a simplification: in practice, we are in a multi-level relational data model that can be represented with NoSQL, vector or graph databases, depending upon the data analytics domain.
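A minimal sketch of the reconciliation steps just described, with the master subscription holding the two sub-models as plain dictionaries; the keys and rules are illustrative only.

```python
# A minimal sketch of the fluid reconciliation in Figure 2; the three "models"
# are represented here as nested dictionaries (the formalism is illustrative).
subscription = {                      # BYOID IDaaS subscription (master)
    "consent_model": {"share_email": False, "mask_location": True},
    "personal_model": {"preferred_channel": "mobile"},
}

def reconcile(master: dict, observed_behaviour: dict, user_approved: bool) -> dict:
    """Steps 1-4: a new behaviour is observed, the user experience must change,
    the consent model is realigned with company rules, then the update lands."""
    if not user_approved:             # withdrawal: personal data stays untouched
        return master
    updated = {k: dict(v) for k, v in master.items()}
    updated["personal_model"].update(observed_behaviour)
    # Company business rules still constrain what the consent model may expose.
    updated["consent_model"]["share_email"] = False
    return updated

print(reconcile(subscription, {"preferred_channel": "web"}, user_approved=True))
```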

USE CASE: the fluid BYOID approach

Scenario

IDaaS models to move ID to the Cloud enable organizations to externalize identity data more knowingly and securely. Employee and customer behaviour has changed: they continuously hold business contacts, calls and meetings on personal devices. Since an increasing number of employees use their mobile devices everywhere, identities can be resident in, and associated with, applications running on different frameworks in a multi-topology cloud configuration. What, then, should be the best IDaaS model satisfying this new employee/customer conduct? Can all users be managed, across multiple locations, while securing company data? Because each identity may be managed by different identity management services, authentication and validation of identities by the cloud infrastructure may not be sufficient. Companies have to verify and control "on-premise" their ID maturity. BYOID based upon IDaaS models allows identifying and securing identity properties. Further, IDaaS models assist ID integrity control over topologies shared with a variety of ID ecosystem frameworks. IMF plays a crucial role in identifying the most appropriate IDaaS model before deploying the BYOID to the Cloud. The BYOID is then an IDaaS model and can be designed "on-premise" and controlled throughout deployment and subscription.

Properties and Directions

This use case is concerned with enterprises deploying their BYOID in the Cloud using IDaaS models and the IMF. There is a need to evaluate "on-premise" the organization's IdM/IAM and SOA maturity before moving the ID to the Cloud. Evaluating the organization's maturity levels involves three steps:

  1. IdM/IAM maturity: measure the IdM/IAM maturity level;
  2. SOA maturity: measure the SOA maturity level – policies (privileges by user role, accreditations, de-accreditations …) and processes dynamically acting;
  3. Identity Ecosystem reliability/maturity: measure the ecosystem maturity/reliability and, above all, the secure service continuity, because in hybrid topologies identities may be owned by different cloud providers resident in multiple topologies.

Objectives are the following:

  • Enable the organization to identify and set the best BYOID through an IDaaS model based upon internal levels of IdM/IAM and SOA maturity compared to the ID ecosystem framework's baseline adherence. This sets maturity in classifying the ID ecosystem framework and in evaluating the reliability the ID ecosystem may provide;
  • Deploy the proper BYOID model, applying the correct subscription and adherence with respect to the IDaaS ecosystem;
  • Periodically measure the organization's IdM/IAM and SOA maturity levels and verify the ID ecosystem reliability/maturity so as to update, and eventually scale, the BYOID deployed.

However, in line with the objectives, the value of the ID ecosystem's level of reliability/maturity is the outcome the company expects in order to:
– Keep BYOID secure and controlled and supervise the IDaaS service subscription;
– Contribute to the ecosystem as a participant and/or as an authority;
– Be a participant/counterpart in setting and approving attribute providers, policies, relying party decisions and IDaaS ecosystem adherence;
– Contribute to the IDaaS Trustmark definition and to its periodical appraisal and updating.


Table 1 – BYOID Use Case properties and directions

Process Flow along the IMF

According to this use case, the IMF process flow encompasses three steps:

Part 1: Appraise the IdM/IAM Maturity Level – Covers definition, maintenance and upgrade of the organization's IdM/IAM maturity level. The IdM/IAM maturity value has to be periodically monitored and controlled to keep coherence with the IDaaS model deployed:


Figure 3 – BYOID: IDM/IAM Maturity Level Appraisal

The Identity and Access Manager verifies the Maturity level of the IdM/IAM system:

  • The IdM Manager controls and regulates access to information assets by providing policy controls of who can use a specific system, based on an individual's role and the current role's permissions and restrictions. This ensures that access privileges are granted according to a single interpretation of policy and that all users and services are properly authenticated, authorized and audited;
  • The BYOID Manager reconciles the BYOID metadata and updates the BYOID metamodel.

The IAM Manager controls whether users' identities can be extended beyond corporate employees to include vendors, customers, machines, generic administrator accounts and electronic access badges, all ruled by policy controls.

Part 2: Appraise the SOA Maturity Level – Covers definition, maintenance and upgrade of the organization's SOA maturity level. The SOA maturity level has to be periodically monitored and controlled to keep coherence with the BYOID released:


Figure 4 – BYOID: SOA maturity level appraisal

The SOA Manager verifies the maturity level of the SOA system through SOA interoperability and defines the organization's maturity in sharing services among departments:

  • The SOA Manager verifies that the map of communications between services is drawn starting from the IdM/IAM system and its achieved maturity;
  • The SOA Manager controls and reports on the following crucial aspects:
      • SOA reference architecture achievements and evolution;
      • education to broaden the SOA culture through the organization;
      • methods and guidelines the organization adopts to apply SOA;
      • policy for SOA appliance and governance.
  • The BYOID Practice Manager tests and executes BYOID consent model reconciliation based on metamodel reconciliation and update. If necessary, the BYOID Manager renormalizes the consent model by a roundtrip with the BYOID metadata at the IdM/IAM maturity level.

Part 3: Appraise the ID Ecosystem Reliability/Maturity – Establishes the maturity/reliability of the ID ecosystem posture. The comparative maturity of the BYOID (company vs. ID ecosystem participants vs. user preferences) has to be continually monitored: points of discontinuity, unmatched policies and untrusted relationships have to be acknowledged as they occur. This helps to better qualify framework accountability, federation assets, participants' reliability and level of contribution:


Figure 5 – BYOID: ID Ecosystem Maturity/Reliability Appraisal

The Service Manager verifies the maturity/reliability level of the ID ecosystem framework:

  • The Service Manager controls that the contribution to the ecosystem in terms of privacy aspects, security components and accountability mechanism settings is congruent;
  • The Service Manager controls that common guidelines keep coherence with the company's policies and standards strategy. Since more than one framework exists inside the ecosystem, rules ensuring that accreditation authorities validate participants' adherence to the ecosystem requirements have to be verified and updated;
  • The Service Manager controls the adherence of the deployed IDaaS to the ID ecosystem, to verify reliability and service continuity;
  • The Service Manager verifies that the accreditation authority ensures participants and frameworks adhere to the accepted identity ecosystem interoperability standards;
  • The Service Manager controls that the ID ecosystem contains all the trusted frameworks that satisfy the established baseline standards and that they are compliant with the company maturity level;
  • The BYOID Practice Manager verifies the framework ecosystem's common levels of adherence (baseline) and tests and compares the BYOID reliability properties;
  • The ID Ecosystem Management Service verifies BYOID adherence and security with respect to the IDaaS subscription.

The ID Ecosystem Management Service provides a combination of criteria to determine service providers' compliance among frameworks and ID ecosystem topologies: the combination defines policies, rules and, eventually, a Trustmark. It gives participants confidence in deciding whom to trust in terms of BYOID framework adherence and among all ID providers.

Conclusion

Managing digital identities across ID ecosystem frameworks is crucial to improving the efficiency of business collaborations. Using personal devices everywhere is becoming the preferred conduct, but before sharing the ID among cloud domains, all involved parties need to be trusted. Still, to meet the demanding needs of security, big data analytics and business intelligence, users and consumers need more efficient and flexible paradigms. In this paper, we identify how the fluid BYOID model satisfies, on the one hand, company security and user data protection and, on the other hand, rapid updating and reconciliation to the user's conduct. IMF provides the necessary platform for collaboration in ID ecosystem topologies. We also introduce a use case to point out how BYOID, built across the ID company consent model and the ID ecosystem trusted access model, can be a foundation to gauge and govern BYOID strategies. Further, the paper can be used to compare different BYOID IDaaS subscriptions, to establish what maturity levels the company might support compared with all business partners running existing IDaaS maturity models, and to ensure that ID in the Cloud meets trustworthy relationships.

Acknowledgements

I sincerely thank Susan Morrow for her precious feedback on the contents and Anil Saldhana for his useful comments on the IDaaS Maturity Framework.

References

N. Piscopo – IDaaS. Verifying the ID ecosystem operational posture
N. Piscopo – A high-level IDaaS metric: if and when moving ID in the Cloud
N. Piscopo – MaaS implements Small Data and enables Personal Clouds
N. Piscopo – Best Practices for Moving to the Cloud using Data Models in the DaaS Life Cycle
N. Piscopo – MaaS (Model as a Service) is the emerging solution to design, map, integrate and publish Open Data
N. Piscopo – MaaS applied to Healthcare – Use Case Practice
N. Piscopo – Applying MaaS to DaaS (Database as a Service) Contracts. An introduction to the Practice
N. Piscopo – Enabling MaaS Open Data Agile Design and Deployment with CA ERwin®
N. Piscopo – ERwin® in the Cloud: How Data Modeling Supports Database as a Service (DaaS) Implementations
N. Piscopo – CA ERwin® Data Modeler’s Role in the Relational Cloud
N. Piscopo – Using CA ERwin® Data Modeler and Microsoft SQL Azure to Move Data to the Cloud within the DaaS Life Cycle
N. Piscopo – Page 16 in Transform2, MaaS and UMA implementation

Disclaimer – This document is provided AS-IS for your informational purposes only. In no event will the contents of "ID Consent: applying the IDaaS Maturity Framework to design and deploy interactive BYOID (Bring-Your-Own-ID) with Use Case" be liable to any party for direct, indirect, special, incidental, economic (including lost business profits, business interruption, loss or damage of data, and the like) or consequential damages, without limitation, arising out of the use or inability to use this documentation, regardless of the form of action, whether in contract, tort (including negligence), breach of warranty, or otherwise, even if advised of the possibility of such damages. Specifically, any warranties are disclaimed, including, but not limited to, the express or implied warranties of merchantability, fitness for a particular purpose and non-infringement, regarding this document's use or performance. All trademarks, trade names, service marks and logos referenced herein belong to their respective companies/offices.

A high-level IDaaS metric: if and when moving ID in the Cloud

Introduction

Building metrics to decide how and whether to move to IDaaS means considering what variables and strategy have to be taken into account when organizations subscribe to identity-as-a-service contracts. Before moving any IdM to the Cloud, organizations should balance costs and risks. Accordingly, the metrics adopted should be flexible enough to be applied both by a company that is developing an IdM system and by a company that already has an IAM in operation but is considering moving the ID to the Cloud. The metric introduced below is included in a forthcoming IDaaS Best Practices guide that helps companies understand, evaluate and then decide if and how to move ID to the Cloud.

IDaaS: Measure Maturity

IDaaS metric definition starts from on-premise IdM/IAM acquisition and implementation costs. Take into consideration the following parameters:
1) COSTS – IdM/IAM costs are mainly based upon Infrastructure, Personnel, Administration (access, help desk, education/courses, …), Attestation and Compliance (including personnel certification and upgrading) and Business Agility expenditures;
2) RISKS – Risks are based upon expenditures to cover, in order:
2.1 Implementation risks (the risk that a proposed investment in technology may diverge from the original or expected requirements);
2.2 Impact risks (the risk that the business or technology needs of the organization may not be met by the investment in the IAM solution, resulting in lower overall total benefits);
2.3 System protection (perimeter defence, audit and surveillance).

The risk/confidence the company is dealing with depends mainly upon the combination of:
– IAM maturity, in terms of implementation, maintenance and evolution capacity;
– SOA maturity, to really understand the policies applied by processes (privileges by user role, accreditations, de-accreditations, …) dynamically acting in the organization;
– adherence to the criteria that measure service provider(s)' compliance with the identity ecosystem framework.


Figure 1 – IDaaS Maturity Framework to IDaaS Best Practices

Accordingly, the metric should be based upon the organization's maturity grade. The gauge proposed is kept as simple as possible and designed to be flexible: if necessary, it can be enriched and applied to more complex systems (more parameters per maturity level, more maturity levels according to the company's policy). The metric measures the confidence/risk when organizations move to IDaaS by adopting the following models:

1)    ID On-premise – ID is outsourced but the infrastructure is kept inside the company. In this case ID personnel manage tools and infrastructure but expertise comes from the outsourcer;
2)    ID Provider Hosted – A private Cloud for IDaaS is managed. Personnel managing the private Cloud (tools) are shared with the service provider. In this case administration, tools and infrastructure are in the private Cloud and ID management is shared;


Figure 2 – IDaaS properties and possible path to the Cloud

3)    ID Hybrid – IDaaS is in the Cloud although sensitive information is still managed internally. ID Hybrid means subscribing to private, community and/or public Cloud services. Tools and infrastructure are shared through the Cloud. ID administration is managed in the Cloud.
4)    ID in the Cloud – The ID is in the Cloud. Only personnel managing contract and service conditions (all aspects: policy, framework, SLA …) are kept internally. (A comparison of the four models is sketched just after this list.)
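For comparison, the sketch below encodes the four models and an illustrative responsibility split per model, loosely following the descriptions above; the exact split in any real contract will differ.

```python
from enum import Enum

class IDaaSModel(Enum):
    ON_PREMISE = "ID On-premise"
    PROVIDER_HOSTED = "ID Provider Hosted"
    HYBRID = "ID Hybrid"
    IN_THE_CLOUD = "ID in the Cloud"

# Illustrative responsibility split per model (who runs what).
RESPONSIBILITIES = {
    IDaaSModel.ON_PREMISE:      {"infrastructure": "company", "tools": "company", "expertise": "outsourcer"},
    IDaaSModel.PROVIDER_HOSTED: {"infrastructure": "private cloud", "tools": "shared", "administration": "shared"},
    IDaaSModel.HYBRID:          {"infrastructure": "cloud", "tools": "shared", "administration": "cloud", "sensitive_data": "internal"},
    IDaaSModel.IN_THE_CLOUD:    {"infrastructure": "cloud", "tools": "cloud", "administration": "cloud", "contract_management": "internal"},
}

print(RESPONSIBILITIES[IDaaSModel.HYBRID])
```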

These aspects are important when considering, on the one hand, what risk (and countermeasures) may be taken when moving the ID to the Cloud and, on the other hand, which gains could be expected in terms of cost savings. Companies have to balance the real business value of the risks, based upon on-premise ID maturity, against the eventual cost reduction, model by model. The following picture shows how three companies with three different levels of maturity for IdM, SOA and ecosystem adherence meet three scenarios in terms of Cost/Saving and Confidence/Risk when they decide to move to IDaaS.


Figure 3 – IDaaS: 3 cases of companies having different level of maturity and risk

Company A – Company A manages advanced projects to implement and maintain high levels of maturity for IdM and SOA. Attention is also paid to the Cloud identity ecosystem: the company applies specific criteria to assess services provisioning in the Cloud. By applying IDaaS Best Practices based on maturity levels, Company A might moderate the risks if it decides to move ID to the Cloud. Its criteria to adopt Cloud services are stable enough to manage on-demand and full-provisioning IDaaS. Cost saving is another aspect that should be taken into consideration. By externalizing IDaaS, the expected savings might be impressive (about 70% of the CapEx invested) and, in this case, moving to the Cloud can be balanced with a path that further moderates the risk.

Company B – Company B has an intermediate maturity and work-in-progress projects for the IdM and SOA implementation. Its knowledge of the ecosystem interface is also increasing, although it is not yet disciplined. Confidence to move ID to the Cloud is low with respect to Company A, and the risk grows with the IDaaS models above. Considering the CapEx to implement internal IAM and BPM procedures, the IDaaS cost saving is lower (about 30% of the CapEx invested) than for Company A. Company B should mitigate the risk by moving to the appropriate IDaaS model. The right path to subscribe to IDaaS is to start from the most proper IDaaS model and progressively increase the levels of maturity.

Company C – Company C has a different challenge with respect to Companies A and B. Company C is not organized to set defined levels of maturity for IdM and SOA. Still, there is not enough interest or experience to classify the proper requirements and accountability mechanisms typical of an identity Cloud ecosystem structure. Identity and SOA cultures exist but they are fragmented. In this case, without CapEx to cover, it seems highly attractive to save soon by moving to IDaaS. However, cost saving alone is not, generally speaking, the best reason to move to the Cloud, nor to subscribe to IDaaS contracts. The risk of moving ID to the Cloud is really high. Company C should ask:

–      how IDs are provisioned, authenticated and managed (IdM, IAM);
–      who retains control over ID policies and assets (SOA);
–      how stringent the peer-to-peer security standards are (ID ecosystem);
–      how and where data encryption and tokenization are employed (ID ecosystem);
–      how and where federated identity policies are employed (for example: check if they are regularly backed by strong and protected authentication practices) (SOA);
–      what about availability, identity data protection and trust in third parties (ID ecosystem);
–      how transparency into cloud operations is employed to ensure multi-tenancy and data isolation (IdM and ID ecosystem).

Could Company C provide the above answers before moving the ID to the Cloud? This essential information should be an asset for any company that decides to migrate to the Cloud. The prerequisites above are only a part of the full requirements subscribers should assert before acquiring Cloud ID services. No company can improvise a move to IDaaS: consequently, the possible choices for Company C may be the following:
1) start from the low-risk ID On-premise model;
2) move the ID to the Cloud in any case, being aware of the risk, by trying to balance the IDaaS cost-saving (OpEx) benefit against Cloud environments introducing transient chains of custody for sensitive enterprise data and applications.

Defining the Metric
The metric that best describes the above scenarios is based on products of exponential functions depending upon parameters that set the organization's maturity levels. In practice, the general mathematical relationship is the following:

[Figure: risk/confidence formula R as a function of the maturity-range parameters]

Here is the meaning of the variables and indexes:
R is the risk/confidence value defining the maturity range for the IDaaS models described above;
Pc is the percentage of completion of each maturity range;
V is the variable corresponding to the magnitudes chosen to measure the maturity of the specified range. To calculate the level of IdM, SOA and Ecosystem maturity, two variables have been chosen: the project cost (Cm is the current cost and CM the estimated budget cost) and the project completion time (Tm is the current project time and TM the estimated project completion time);
N is the number of maturity ranges considered (IdM, SOA, Ecosystem …).
Constraints: the exponential function is a pragmatic risk estimation based upon the concept of probability density. To compute the risk/confidence, no averaging technique is included: the maximum of the series of calculated risks has been preferred over statistical average models. The metric requires the following constraint: at least three maturity ranges should be considered to estimate the best IDaaS model, namely IdM, SOA and the Ecosystem Framework. Further, the metric is extensible and flexible enough to consider more ranges of maturity and, inside each one, more variables in addition to project costs and times. Finally, R (risk/confidence) is computed as the maximum value among the maturity series' risks. In practice, consider the following test rates (a computational sketch follows them):

IdM Maturity: Percent of completion 30%, Cm = 25.000,00 $, CM = 75.000,00 $, Tm = 6 months and TM = 24 months
SOA Maturity: Percent of completion 40%, Cm = 55.000,00 $, CM = 90.000,00 $, Tm = 8 months and TM = 24 months
Ecosystem Framework Maturity: Percent of completion 15%, Cm = 10.000,00 $, CM = 30.000,00 $, Tm = 2 months and TM = 6 months
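The exact expression of the gauge appears only as a figure in the original; the sketch below therefore assumes an illustrative functional form (risk per range as one minus the completion share damped by exponentials of the cost and time ratios, with R taken as the maximum over the ranges). It follows the description and the test rates above, but it is not the author's formula and will not necessarily reproduce the 98% figure quoted below.

```python
from math import exp

def range_risk(pc: float, cm: float, cM: float, tm: float, tM: float) -> float:
    """Hypothetical reading of the gauge: risk for one maturity range.
    This is an assumed form for illustration, not the article's exact formula."""
    return 1.0 - pc * exp(-(cm / cM)) * exp(-(tm / tM))

ranges = {   # the test rates above (costs in $, times in months)
    "IdM":       dict(pc=0.30, cm=25_000, cM=75_000, tm=6, tM=24),
    "SOA":       dict(pc=0.40, cm=55_000, cM=90_000, tm=8, tM=24),
    "Ecosystem": dict(pc=0.15, cm=10_000, cM=30_000, tm=2, tM=6),
}

risks = {name: range_risk(**v) for name, v in ranges.items()}
overall = max(risks.values())        # R is the max over the maturity ranges
print(risks, round(overall, 2))
```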

The risk/confidence outcomes based upon the above values are the following, and the max value is:

[Figure: computed risk/confidence values per maturity range; the maximum value is 98%]

Could the company accept a risk of 98% in moving its ID system to the Cloud? What is the main pain point when looking at the maturity ranges and the risk rates? Which IDaaS model could moderate the risk and reduce the costs? The solution in the figure below might be a measured way to gain confidence and awareness before subscribing to an IDaaS contract.


Figure 4 – Snapshot based upon the above maturity rates and risk/confidence values

Conclusion

Companies can apply a systematic approach by adopting the gauge exploited above. The metric can help in deciding whether balancing risks and OpEx advantages makes subscribing to an IDaaS contract appropriate in terms of security and business benefits. Looking at the cost savings for Company C, the cutbacks could be modest (about 20% or less with respect to the actual CapEx), although the ROI would be faster. It depends upon the IDaaS strategy the company decides to implement.

References

[1] N. Piscopo – Applying MaaS to DaaS (Database as a Service) Contracts. An introduction to the Practice http://cloudbestpractices.net/profiles/blogs/applying-maas-to-daas-database-as-a-service-contracts-an
[2] N. Piscopo – Best Practices for Moving to the Cloud using Data Models in the DaaS Life Cycle
[3] N. McEvoy – IDaaS Identity-as-a-Service best practices http://CanadaCloud.biz
[4] E. Baize et al. – Identity & Data Protection in the Cloud
[5] F. Villavicencio – Advantages of a Hybrid Co-Sourced IDaaS Model
[6] Identity in the Cloud Outsourcing Profile Version 1.0 – OASIS Committee Note Draft 01 / Public Review Draft 01
[7] N. Piscopo, N. McEvoy – IDaaS – Introduction to the Identity in the Cloud
[8] WG-CloudIDSec IDaaS (Identity as a Service) www.cloud-identiy.info

Disclaimer – This document is provided AS-IS for your informational purposes only. In no event will the contents of "A high-level IDaaS metric: if and when moving ID in the Cloud" be liable to any party for direct, indirect, special, incidental, economic (including lost business profits, business interruption, loss or damage of data, and the like) or consequential damages, without limitation, arising out of the use or inability to use this documentation, regardless of the form of action, whether in contract, tort (including negligence), breach of warranty, or otherwise, even if advised of the possibility of such damages. Specifically, any warranties are disclaimed, including, but not limited to, the express or implied warranties of merchantability, fitness for a particular purpose and non-infringement, regarding this document's use or performance. All trademarks, trade names, service marks and logos referenced herein belong to their respective companies/offices.

Canada launches ‘Roadmap to the Cloud’

Here is the first version of our Canada Cloud Roadmap with inputs from contributing expert authors.

Download: Canada Cloud Roadmap

We will continue to add more great content from a wide range of experts, so that ultimately this document offers a powerful roadmap template for planning an enterprise migration to the Cloud.

The Roadmap is also a framework for a channel partner program, where solution journeys like BYOD will organize go-to-market sales campaigns.

Join in – To feature your products and services in this campaign, join our Roadmap group on LinkedIn.

Cloud Readiness Assessment – Planning your Cloud TEI (Total Economic Impact)

Our objective here at the Canada Cloud Network is to build a local forum for innovating global Cloud best practices.

For example, over at Sheepdog our goal is to build one of Canada's premier brands for expert Cloud consulting.

Central to this is development of the ‘Cloud Readiness Assessment’, a standardized process for helping your business understand where and how it could best exploit the trends of Cloud computing.

Total Economic Impact

The framework for planning the Cloud migration business case can be defined in terms that Forrester Consulting calls ‘Total Economic Impact’ (TEI).

Recently Google commissioned Forrester Research to identify the TEI (Total Economic Impact) of moving to Google Apps, polling around 600 mid-sized firms about their collaboration plans.

Download the report here: the Google Apps TEI Report from Forrester Research, used to plan the ROI of moving from a legacy messaging and collaboration platform to Google Apps. They describe how organizations have enjoyed business improvements including:

  • Break even within 1.4 months
  • 329% risk-adjusted ROI
  • A Net Present Value of over $10m following an investment of $400k

Planning your Cloud Roadmap – Cloud Readiness Assessment

A Cloud Readiness Assessment is a consulting engagement that analyzes your organization and its business requirements, and maps these to a Cloud strategy.

An 'economic impact' review is a great way to frame this exercise, so you can be clear about how the ROI will be achieved, and importantly you can test for organizational needs by testing your 'readiness' for Cloud services in different areas:

  • Virtualization and business continuity
  • Desktop and IT operations
  • Unified Communications and staff productivity

The exercise can include some very specific auditing work – for example, an assessment of your Microsoft Office licensing situation. In some cases customers find they are paying for versions they aren't using, in one case yielding $2.5m in annual savings.

If this is then followed by the additional cost savings and productivity benefits that Forrester describes, it's clear how straightforward it is to plan a successful ROI from a Cloud migration.

The objective of the Canada Cloud Roadmap is to help flesh out the detail of a number of these journeys. Desktop office software is but one of many scenarios where the same principle and process can be repeated.

With each of these functional areas having its own self-contained TEI, it's possible to cherry-pick just one or a combination of them to assemble the Roadmap that best suits your organization.

MaaS implements Small Data and enables Personal Clouds

Abstract – MaaS (Model as a Service) sets a new concept for ordering and classifying data modeling design and deployment to the Cloud. MaaS changes the way data is moved to the Cloud because it allows defining data taxonomy, size and contents. Starting from data model design, MaaS might guide the DaaS (Database as a Service) lifecycle, providing data granularity and duty rules: as a consequence, MaaS implements the new concept of Small Data.

In fact, Small Data answers the need to control "on-premise" data dimension and granularity. Small Data is not, however, a data volume limitation. Small Data affords full configuration of data modeling and provides two main advantages: data model scale and data ownership, which provide assigned data deployment and, finally, data deletion in the Cloud.

Introduction

The inheritance coming from the past forces us to manage big data as a consequence of multiple integrations and aggregations of data systems and data movement. Data coming from the intensive data applications of social networks has contributed to blowing up the exabyte containers.

Administering big data is not an option but rather a debt contracted, above all, with data management history. Big Data analytics, anyway, seems to be the standard practice for storing massive data. Is there any way to change this norm? Companies that have for many years used data models to design and map data started the change and today hold the "tiller" of their data heritage.

Accordingly, Small Data is far from being a further catch-phrase: the antonym of Big Data aims to change mindsets and to use data models to design fit, noticeable data systems, especially when data is moved to the Cloud. MaaS implements Small Data, enables Personal Clouds and helps to recover, order and classify the data inheritance of the past as well.

Why MaaS defines and implements Small Data
MaaS meets the ever-increasing need for data modeling and provides a solution to satisfy continuity of data design and application. Further, it helps in choosing and defining architectures, properties and service assets in view of the possible evolution and changes the data service can have. MaaS addresses scaling and dimensioning in designing data systems, supports scalability and, in particular, agile data modeling practice. Small Data starts when the dimension of the data system has to be defined and controlled from its description (metadata) onward. In effect, we are improperly speaking of data systems: actually, we are dealing with data services. The Cloud is the great supplier of data services.

Since MaaS allows defining "on-premise" data design requirements, data topology, performance, placement and deployment, the models themselves are the services mapped in the Cloud. In fact, data models allow verifying "on-premise" how and where data has to be designed to meet the Cloud service's requisites. In practice, MaaS enables:

– Designing the data storage model. The model should enable query processing directly against databases to strengthen privacy levels and secure changes from database providers;

– Modeling data to calculate "a priori" the physical resource allocation. How many resources does the service need? Do database partitions influence resource allocation and/or replication? Modeling the data means designing the service; calculating these magnitudes a priori drives both deployment and database growth;

– Modeling data to predict usage “early” and to optimize database handling. Performance and availability are two of the main goals promised by the Cloud. Usage is not directly dependent upon the infrastructure and, as a consequence, could be a constraint. Calculating the usage rate means understanding the data application life cycle and then optimizing the data service properties;

– Designing multi-data structures to control databases elasticity and scalability. Models contain deployment properties and map the target database service. Therefore, the model designs “on-premise” database elasticity and scalability. Still, we will see later that multi-database design is a way to control data persistence.

Thus, imagine users asking for temporary services to be deployed and, after the services have been closed, cancelled. Services satisfying these requirements are based upon data models designed with MaaS agile modeling techniques: controlled size and contents, fast updates, rapid testing and continuous improvement through to the generation of the model/dataset on the target database. Models should set the users' data contents, dimension (for example, to suit mobile services) and data deployment (geo-location, timing …), and allow the data to be, on demand, definitively destroyed.
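A minimal sketch of what such a MaaS-designed Small Data descriptor could look like: content, size, deployment location and a time limit fixed at design time, plus on-demand destruction. All names and fields are assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class SmallDataModel:
    """Illustrative MaaS-style descriptor for a Small Data service: content,
    size and deployment are fixed at design time; field names are assumptions."""
    owner: str
    content: dict                 # the on-premise model of what may be stored
    max_size_mb: int              # controlled dimension (e.g. to suit mobile services)
    geo_location: str             # where the dataset may be deployed
    expires_at: datetime          # optional time limit for Cloud allocation

    def is_expired(self, now: datetime) -> bool:
        return now >= self.expires_at

    def shred(self) -> None:
        """On-demand destruction at the end of the service (model-level view)."""
        self.content.clear()

model = SmallDataModel(owner="user-42",
                       content={"profile": {"name": "example"}, "prefs": {}},
                       max_size_mb=5, geo_location="eu-west",
                       expires_at=datetime.now(timezone.utc) + timedelta(days=30))
print(model.is_expired(datetime.now(timezone.utc)))   # False until the time limit
model.shred()
```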

With MaaS, Small Data can be defined from the metamodel design and then implemented, synchronized and deployed to the target by applying the DaaS lifecycle. Of course, although Small Data is placed under size and content control, it can be created and replicated endlessly: is this a further way to switch back to Big Data?

The following aspects should be considered:

1) Users should be enabled to destroy their Small Data in the Cloud when the service is over. This is a great feature for data security. Still, data navigation changes from infinite chains to point-to-point gaps (or better, "Small to Small" Data) based upon the Small Data model definitions;

2) A time limit could be set, or an over-storage timing strategy might be defined, for Small Data allocation in the Cloud;

3) Statistically speaking, by applying Small Data the average volume of data stored by intensive data applications (social networks, for example) should be computed and standard deviations estimated, due to multiple and, above all, free storage allocation.

This doesn't mean the YB is the last order of magnitude of data size the SI has defined. Definitively, MaaS enables service designers to plan, create and synchronize Small Data models from and to any web data source and data management system in the Cloud. Data granularity changes, enabling designers to calibrate data models and data services by defining the model's content, which allows users to deploy, i.e. allocate, and then, at the end of the service, shred the data in the Cloud. This is a new frontier for mobile services, Open Data and what today might be defined as the Personal Cloud.

MaaS enables Personal Clouds
What is really new in the Small Data definition is the possibility of moving data ownership from the big players to the users and, as a consequence, of linking ownership and deployment location by an explicit relation. Today data storage is almost entirely under provider control: providers and storage players manage data users and data security. By applying MaaS, the Personal Cloud is enabled, and here is what changes:

1) Integrity defined in MaaS at the Small Data level is maintained throughout the service. Ownership matches the data structure/dataset deployed;

2) MaaS identifies trust boundaries throughout the IT architecture. Data models are the right way to define trust boundaries and ownership to prevent unauthorized access and sharing in “Small to Small” Cloud storage and navigation;

3) MaaS enables the location to be set in the data model. Any mismatch could be an infringement and must be reconciled with the terms registered in the Small Data design. Point-to-point navigation based upon Small Data simplifies Personal Data management and maintenance, which has a positive impact on data storage order and security;

4) Ownership and data location are linked by an explicit relation. Once Personal Data in the Cloud has to be deleted, the Cloud provider should assure that the data are unrecoverable. Looking at the data model mapping, data has to be destroyed in the location defined in the Small Data design. Data owners know where the data has been deployed because, before opening the data service in the Cloud, they may accept or reject the assigned location and may ask for a new storage site (a reconciliation sketch follows below).
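To illustrate points 3) and 4), the sketch below reconciles the location reported at deployment time with the one registered in the Small Data design; the function and location names are invented for this example.

# Hypothetical reconciliation check: the deployed location must match the
# location registered in the Small Data design; any mismatch is treated as
# a potential infringement of the agreed terms.

def reconcile_location(designed_location: str, deployed_location: str) -> None:
    if designed_location != deployed_location:
        raise ValueError(
            f"deployment in '{deployed_location}' does not match the location "
            f"'{designed_location}' registered in the Small Data design"
        )

reconcile_location("eu-west", "eu-west")     # terms respected
# reconcile_location("eu-west", "us-east")   # would raise: owner must approve a new site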
The Personal Cloud has a wide range of applications:

1) Mobile services with single or multiple personal storage areas backed by single or multiple datasets in the Cloud;

2) Healthcare services as introduced, for example, in [10] and in [11];

3) Open Data services, especially when they interface with services 1) and 2) above;

4) HR services, mainly when they concern curricula content. Owners should be free to definitively cancel personal data as defined in the Small Data design;

5) Generic personal data in the Cloud, regardless of whether they are stored permanently or temporarily.

Applying MaaS, Small Data can be considered on-premise services because they collect behaviours and information concerning structures (to be deployed in the Cloud), access rights, security and scaling, partitioning and evolution. In other words, Small Data ensure that the behaviour and effectiveness of the released Cloud applications can be measured and tracked to meet user’s needs. Models leverage Cloud data services to enable flexible deployment and therefore enforce Personal Data persistence, storage and geo-location policies throughout the Cloud.

Conclusion
Big Data is a consequence; Small Data is a new start. MaaS provides best practices and guidelines to implement Small Data and Personal Cloud ownership, starting from data modeling and the DaaS lifecycle. We want to underline that Big Data as we know it is a consequence of how companies have used, stored and maintained data for many years. Small Data, by contrast, might be a new way to manage data in the Cloud. Especially when personal data are considered, the Personal Cloud provides, on one hand, a preconfigured and operational data definition (for example, local information vs. cloud information) and, on the other hand, the details of how to enable provisioning and deployment of multiple storage in the Cloud. Finally, starting from “on-premise” Small Data design, the Personal Cloud can be applied and users can soon gain an understanding of Cloud deployment, data centre geo-locations and service constraints.

Glossary
Big Data – Collection of datasets that cannot be processed (analysis, storage, capture, search, sharing, visualization …) using on-hand database management tools or traditional data processing applications, owing to data complexity, data volume and fast growth;
EB – Exabyte, unit of information or computer storage equal to one quintillion bytes (10^18 bytes);
DaaS – Database as a Service;
MaaS – Model as a Service (a trademark);
SI – Système International d’unités (metric prefixes);
YB – Yottabyte, unit of information or computer storage equal to one septillion bytes (10^24 bytes).

References
[1] N. Piscopo – ERwin® in the Cloud: How Data Modeling Supports Database as a Service (DaaS) Implementations
[2] N. Piscopo – CA ERwin® Data Modeler’s Role in the Relational Cloud
[3] D. Burbank, S. Hoberman – Data Modeling Made Simple with CA ERwin® Data Modeler r8
[4] N. Piscopo – Best Practices for Moving to the Cloud using Data Models in the DaaS Life Cycle
[5] N. Piscopo – Using CA ERwin® Data Modeler and Microsoft SQL Azure to Move Data to the Cloud within the DaaS Life Cycle
[6] N. Piscopo – MaaS (Model as a Service) is the emerging solution to design, map, integrate and publish Open Data https://cloudbestpractices.wordpress.com/2012/10/21/maas/
[7] N. Piscopo – MaaS Workshop, Awareness, Courses Syllabus
[8] N. Piscopo – DaaS Workshop, Awareness, Courses Syllabus
[9] N. Piscopo – Applying MaaS to DaaS (Database as a Service) Contracts. An introduction to the Practice https://cloudbestpractices.wordpress.com/2012/11/04/applying-maas-to-daas/
[10] N. Piscopo – MaaS applied to Healthcare – Use Case Practice, https://cloudbestpractices.wordpress.com/2012/12/10/maas-applied-to-healthcare/
[11] N. Piscopo – MaaS and UMA implementation at page 16 in Transform2:, https://cloudbestpractices.files.wordpress.com/2013/01/transform-203.pdf
[12] Agile Modeling – http://www.agilemodeling.com/

Disclaimer
“MaaS implements Small Data and enables Personal Clouds” (the Document) is provided AS-IS for your informational purposes only. In no event will the contributors to this document be liable to any party for direct, indirect, special, incidental, economic (including lost business profits, business interruption, loss or damage of data, and the like) or consequential damages, without limitation, arising out of the use or inability to use this document or the products, regardless of the form of action, whether in contract, tort (including negligence), breach of warranty, or otherwise, even if advised of the possibility of such damages. Specifically, any warranties are disclaimed, including, but not limited to, the express or implied warranties of merchantability, fitness for a particular purpose and non-infringement, regarding this document or the use or performance of the products. All trademarks, trade names, service marks and logos referenced herein belong to their respective companies/offices.

Getting Real with Ruby: Understanding the Benefits

By Jennifer Marsh

Jennifer Marsh is a software developer, programmer and technology writer and occasionally blogs for Rackspace Hosting.

Ruby is an advanced language for many programmers, and a powerful one for building dynamic interfaces on the web. Dynamic web hosting shouldn’t be taken lightly because security holes still exist. A good cloud web host will offer a safe environment for development while still offering scalability and usability for Ruby programming, testing and deployment.

Space for Testing and Development

Web applications can grow to several gigabytes. For newer Ruby developers, it’s helpful to have enough storage space for backups, so a backup can be made before deploying code changes. Ruby is an interpreted language, but a bug can still mean a lot of time and resources devoted to discovery and fixing. Instead of emergency code reviews, the developer can restore the old version of the application before troubleshooting bugs.

Support for Database or Hard Drive Restoration

In severe cases, the application corrupts the data stored in the database. A good web host will back up the database and then restore it when the site owner needs it. This is especially useful in emergencies when the site gets hacked or data is corrupted due to application changes or hard drive crashes. The web host should support the client, including in cases of restoring database and application backups.

Find Support for Ruby

To run Ruby, the web host must support the language and its frameworks. Check with the hosting company, and verify the host allows execution of CGI files. A good way to check is to find a host that has FastCGI and specifies that it supports Ruby and Ruby on Rails. Ruby is typically supported by Linux hosts, but some Windows hosts will support Ruby. Like Java, Ruby is platform-independent, so it can run on any operating system.

Ask for Shell Access

Ruby can be a bit hairy to configure. If the programmer is familiar with the language, having shell access helps speed up application configuration. Not all hosts will offer shell access, but with extended or advanced service, most hosts will oblige the webmaster. Shell access gives the webmaster more control of the Ruby settings.

The most important aspects of a web host are customer support and up-time. Most web hosts have a contract with the client that promises a percentage of up-time. This should be around 99%, meaning the website will be up for visitors. Check with the host for contract specifics before purchasing cloud hosting for Ruby.

The Evolution of Single Sign-on

Replacing mainframes with 21st century identity

By Paul Madsen, senior technical architect

The concept of single sign-on (SSO) is not a new one, and over the years it has successfully bridged the gap between security and productivity for organizations all over the globe.

Allowing users to authenticate once to gain access to enterprise applications improves access security and user productivity by reducing the need for passwords.

In the days of mainframes, SSO was used to help maintain productivity and security from inside the protection of firewalls. As organizations moved to custom-built authentication systems in the 1990s, it became recognized as enterprise SSO (ESSO) and later evolved into browser-based plugin or web-proxy methods known as web access management (WAM). IT’s focus was on integrating applications exclusively within the network perimeter.

However, as enterprises shifted toward cloud-based services at the turn of the century and software-as-a-service (SaaS) applications became more prevalent, the domain-based SSO mechanisms began breaking. This shift created a new need for a secure connection to multiple applications outside of the enterprise perimeter and transformed the perception of SSO.

Large-scale Internet providers like Facebook and Google also created a need for consumer-facing SSO, which did not previously exist.

Prior to these social networks, SSO was used only within the enterprise, and new technology had to be created to meet the demands of businesses as well as to securely authenticate billions of Internet users.

There are many SSO options available today that fit all types of use cases for the enterprise, business and consumer, and they have been divided into three tiers, with Tier 1 SSO being the strongest and most advanced of the trio. Tier 1 SSO offers maximum security when moving to the cloud, the highest convenience to all parties, the highest reliability as browser and web applications go through revisions, and generally has the lowest total cost of ownership. Tier 2 SSO is the mid-level offering meant for enterprises with a “cloud second” strategy. Tier 3 SSO offers the least amount of security and is generally used by small businesses moving to the cloud outside of high-security environments.

The defining aspect of Tier 1 SSO is that authentication is driven by standards-based token exchange while the user directories remain in place within the centrally administered domain, as opposed to being synchronized externally. Standards such as SAML (Security Assertion Markup Language), OpenID Connect and OAuth have allowed this new class of SSO to emerge for the cloud generation. Standards are important because they provide a framework that promotes consistent authentication of identity, which government agencies rely on to ensure security.
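As a hedged illustration of the standards-based token exchange described above, the snippet below validates an OpenID Connect ID token with the PyJWT library; the issuer, audience and key are placeholders, and a real deployment would fetch the identity provider’s published signing keys rather than hard-code one.

# Illustrative only: the service provider verifies an ID token issued by the
# identity provider, so the user signs on once without a local password.
import jwt  # PyJWT

def validate_id_token(id_token: str, idp_public_key: str) -> dict:
    claims = jwt.decode(
        id_token,
        idp_public_key,
        algorithms=["RS256"],
        audience="https://sp.example.com",   # placeholder: this service provider
        issuer="https://idp.example.com",    # placeholder: the identity provider
    )
    return claims  # verified identity claims; the user directory stays with the IdP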

These standards have become such a staple in the authentication industry that government agencies like the United States Federal CIO Council, NIST (National Institute of Standards and Technology) and Industry Canada have created programs to ensure these standards are viable, robust, reliable, sustainable and interoperable as documented.

The Federal CIO Council has created the Identity, Credential, and Access Management (ICAM) committee to define a process where the government profiles identity management standards to incorporate the government’s security and privacy requirements, to ensure secure and reliable processes.

The committee created the Federal Identity, Credential, and Access Management (FICAM) roadmap to provide agencies with architecture and implementation guidance that addresses security problems, concerns and best practices. Industry Canada’s Authentication Principles Working Group created the Principles for Electronic Authentication, which were designed to serve as benchmarks for the development, provision and use of authentication services in Canada.

As enterprises continue to adopt cloud-based technologies outside of their network perimeter, the need for reliable SSO solutions becomes more vital. Vendors that support these government-issued guidelines offer the strongest and most secure access management available today. Since the establishment of SSO, the technological capabilities have greatly advanced and SSO has been forced to evolve over the past few decades. First-generation SSO solutions were not faced with Internet scale or exterior network access, whereas today’s SSO is up against many more obstacles.

As IT technology progresses in the future, SSO will have to grow with it and strengthen its security. For instance, while SSO is the expectation for web browser applications, the emergence of native applications (downloaded and installed onto mobile devices) has highlighted the necessity of a similar SSO experience for this class of applications. To address these new use cases, new standards (or profiles of existing standards) are emerging, and initiatives like the Principles for Electronic Authentication will have to adapt accordingly in order to offer the best guidance possible.

“Policy as a Service” – Critical for Cloud Deployments!

The financial ROI of Cloud security and compliance is judged by decision makers in end-user organizations by the same measures as Cloud computing in general, i.e. by how much it cuts up-front capital expenditure and in-house manual maintenance cost.

However, manually translating security policy into technical implementation is difficult, expensive, and error-prone (esp. for the application layer). In order to reduce security related manual maintenance cost at the end-user organization, security tools need to become more automated.

With the emergence of Cloud PaaS, it is therefore logical to move all or parts of the model-driven security architecture into the Cloud to protect and audit Cloud applications and mashups with maximal automation. In particular, policies are provided as a Cloud service to application development and deployment tools (i.e. “Policy as a Service”), and policy automation is embedded into Cloud application deployment and runtime platforms (i.e. automated policy generation/update, enforcement, monitoring).

Different Cloud deployment scenarios are possible, which differ from local non-Cloud deployments where model-driven security is conventionally installed within or alongside a locally installed development tool (e.g. Eclipse). Policy as a Service (see ObjectSecurity OpenPMF) involves five parts:

1. Policy Configuration from the Cloud: Policy configurations are provided as a subscription-based Cloud service to application development tools. Offering specification, maintenance, and update of policy models as a Cloud service to application developers and security experts has significant benefits:

Most importantly, instead of having to specify (or buy and install) and maintain the policy models used for model-driven security on an on-going basis, application developers and security specialists can now simply subscribe to the kinds of policy feeds they require without the need to know the details of the models.

The Policy as a Service provider (typically different from the Cloud provider) takes care of policy modeling, maintenance, and update. Other benefits are that the user organization does not need to be a security and compliance expert because the up-to-date policy models will be provided as a feed to them on an on-going basis, that the upfront cost hurdle is minimized thanks to the subscription model, and that the end-user organization has no need to continually monitor regulations and best practices for changes.
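A minimal, subscriber-side sketch of what consuming such a policy feed could look like; the endpoint and payload format are hypothetical and are not OpenPMF’s actual interface.

# Hypothetical policy-feed subscription: the Policy as a Service provider keeps
# the policy models up to date; the subscriber only pulls and caches them.
import json
import urllib.request

POLICY_FEED_URL = "https://policy-service.example.com/feeds/pci-dss"  # placeholder

def fetch_policy_models(feed_url: str = POLICY_FEED_URL):
    with urllib.request.urlopen(feed_url) as response:
        return json.load(response)  # e.g. a list of named policy model documents

# The subscriber never authors or maintains these models; it simply refreshes
# the feed whenever regulations or best practices change upstream.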

2. Automatic Technical Policy Generation in the Cloud: The automatic policy generation feature of model-driven security (MDS) is integrated into the development, deployment, and mashup tools (to get access to functional application information).

It consumes the policy feed described in the previous section. Platform as a Service (PaaS) sometimes includes both Cloud hosted development and mashup tools and a Cloud hosted runtime application platform. In this case, automatic technical policy generation using MDS can also be moved into the Cloud, so that technical security policies can be automatically generated for the applications during the Cloud hosted development, deployment and/or mashup process.

This is particularly the case for mashup tools, because those tools are more likely to be Cloud hosted, are often graphical and/or model-driven, and are concerned with interactions and information flows between Cloud services. If the development tools are not hosted on the PaaS Cloud, then the MDS technical policy auto-generation feature needs to be integrated into the local development tools.
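A toy sketch of the generation step, purely to convey the idea: a functional model of service interactions is translated into technical access rules. The rule format below is invented for this example and is not OpenPMF’s policy language.

# Toy example: turn a functional model (which service calls which) into
# technical access rules, with a default-deny rule appended at the end.

functional_model = {
    "order-ui":      ["order-service"],
    "order-service": ["payment-service", "inventory-service"],
}

def generate_rules(model: dict) -> list:
    rules = []
    for caller, callees in model.items():
        for callee in callees:
            rules.append({"effect": "permit", "subject": caller, "resource": callee})
    rules.append({"effect": "deny", "subject": "*", "resource": "*"})
    return rules

technical_policy = generate_rules(functional_model)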

3. Automatic Security Policy Enforcement in the Cloud: Policy enforcement should naturally be integrated into the PaaS application platform so that the generated technical policies are automatically enforced whenever Cloud services are accessed.

As described in the previous section, policies are either generated within the Cloud using hosted MDS and PaaS development tools, or are uploaded from local MDS and development tools. How policy enforcement points are built into the PaaS application platform depends on whether the PaaS application platform (1) allows the installation of a policy enforcement point (e.g. various open source PaaS platforms, see case studies below), (2) supports a standards-based policy enforcement point (e.g. OASIS XACML), or (3) supports a proprietary policy enforcement point.
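Continuing the toy example from part 2, a policy enforcement point embedded in the runtime platform might check every invocation against the generated rules; this is an illustration of the idea, not a real XACML or OpenPMF enforcement point.

# Toy policy enforcement point: first matching rule wins, default deny.
# The rule format is the one invented in the previous sketch.

rules = [
    {"effect": "permit", "subject": "order-ui", "resource": "order-service"},
    {"effect": "deny",   "subject": "*",        "resource": "*"},
]

def is_permitted(subject: str, resource: str) -> bool:
    for rule in rules:
        if rule["subject"] in (subject, "*") and rule["resource"] in (resource, "*"):
            return rule["effect"] == "permit"
    return False

def enforce(subject: str, resource: str) -> None:
    if not is_permitted(subject, resource):
        # Blocked invocations would raise runtime alerts (see part 4).
        raise PermissionError(f"{subject} -> {resource} blocked by policy")

enforce("order-ui", "order-service")        # permitted
# enforce("order-ui", "payment-service")    # would be blocked and reported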

4. Automatic Policy Monitoring into the Cloud: Policy enforcement points typically raise security related runtime alerts, especially about incidents related to invocations that have been blocked. The collection, analysis and visual representation of those alerts can also be moved into the Cloud.

This has numerous benefits: incidents can be centrally analyzed for multiple Cloud services together with other information (e.g. network intrusion detection). Also, an integrated visual representation of the security posture across multiple Cloud services can be provided, integrated incident information can be stored for auditing purposes, and compliance-related decision-support tools can be offered as a Cloud service.
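A hedged sketch of the central collection side: enforcement points in several Cloud services report blocked invocations to one place where the security posture can be summarized. The alert fields are invented for illustration.

# Illustrative central collection of policy alerts raised by enforcement points
# across multiple Cloud services; fields and counters are invented for this sketch.
from collections import Counter
from datetime import datetime, timezone

alerts = []

def report_alert(service: str, subject: str, resource: str) -> None:
    alerts.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "service": service,
        "subject": subject,
        "resource": resource,
        "kind": "blocked-invocation",
    })

def posture_summary() -> Counter:
    """Blocked invocations per Cloud service, e.g. for an integrated dashboard."""
    return Counter(alert["service"] for alert in alerts)

report_alert("crm-service", "order-ui", "payment-service")
print(posture_summary())  # Counter({'crm-service': 1})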

5. Automatic Updating: The described model-driven approach enables automatic updates of technical security policy enforcement and auditing whenever applications, and especially their interactions, change. The same automation is possible when security policy requirements change.

Publications about this can be found in the ISSA Journal October 2010 and on IBM developerWorks. Contact me if you would like more information about Policy as a Service.

It is also important to note that model-driven security (MDS) does not necessarily rely on model-driven development to work – even though it relies on application, system, and interaction models (so-called “functional models”) to achieve significant security policy automation.

The traditional MDS approach is that these functional models ideally come from manually defined application models authored during model-driven development (e.g. UML, BPMN). But this is not necessary. We have designed an additional solution for our OpenPMF where the functional models are in fact obtained from an IT asset management tool that is part of our partner’s (Promia, Inc.) intrusion detection/prevention product Raven. This works well, and enables the use of model-driven security in environments which do not support model-driven development or where model-driven development is not desired.

While this may not sound like a big deal, it is: it dramatically widens the applicability of model-driven security and makes adoption a lot easier.

(note: this was cross-posted from www.modeldrivensecurity.org) by Dr. Ulrich Lang, CEO, ObjectSecurity

Standard G-Cloud Provider Profile

As described in this previous blog, we started the process of collectively building a standardized profile that Cloud Providers can use to describe their services, hosting facilities and other factors that are important to clients, so they can make like-for-like comparisons.

We’re very lucky in that my first amateur effort has thankfully been superseded by a much more in-depth version.

This has been contributed by Paul Waine, who works as the Lead G-Cloud Architect at the Home Office, so this is the same standard profile they use for G-Cloud Providers:

Download: G-Cloud Hosting Provider Profile (XLS spreadsheet)

Feel free to download it and use it to define your own service provider profile, and be sure to let us know how it can be improved: join in and post feedback comments on this discussion thread in our LinkedIn group.