Agile Big Data and Many-Particle approach change Marketing and Sales effectiveness

Big Data projects have a broad impact on organizations. A Big Data implementation goes beyond what could normally be considered data management: together with Many-Particle data aggregation, it is a new way to align data management with the business. With Big Data, the path from data sources to data intelligence changes drastically, and the way data intelligence is designed and implemented definitively changes access, ingestion, distillation, processing and data visualization as well. Big Data projects meet agile implementation and shorten the data intelligence lifecycle by increasing service capability and adequacy to fast-growing datasets and a fast-moving business. Accordingly, agile practice and the many-particle approach minimize data entropy together with data access time cycles everywhere, preserve data security and enhance the user experience, enabling instant business realignment.

Contents
Introduction
Data Topology and Agile Big Data
The Many-Particle approach
Conclusion
Acknowledgment
References

Introduction
The way to move from today's business data into Big Data intelligence can be a costly and time-consuming process, one that can erode the tremendous advantage of the Big Data and Cloud paradigms. Today, information is still misaligned with the business despite the huge efforts of past business intelligence projects: companies still use only a fraction of their real corporate data heritage. As a consequence, the data spectrum exploited is unpredictable, and aligning data and business remains a long-term process. Agile Big Data aligns the data heritage and business data instantly. Continuous data ingestion and distillation drastically reduce the ETL process, so that intelligence runs on the "big data-lake" when needed. On-premise big data topology and functional data intelligence then play a crucial role in meeting profitability, customer affinity and fast-moving business goals. This paper introduces the business case for Big Data to avoid Marketing and Sales data entropy, reduce risk and increase the likelihood of an aware and successful Big Data implementation.

Data Topology and Agile Big Data
Documenting data evolution and updates has long been considered good practice in managing data. At the beginning of the cloud paradigm, driven by the attraction of cost cutting, keeping a map of the company data heritage became a great benefit, especially when services have to be subscribed in the cloud. Data models, a way to document the data heritage, evolved into MaaS (Model as a Service), which supports agile design and delivery of data services in the Cloud and makes the difference in planning a Big Data implementation project.

Considering data models does not mean structured data only. On-premise models map data coming from structured, semi-structured and unstructured sources. Data models map a defined services topology that can be deployed on-premise or in the cloud. Data models are still needed for early exploration analysis and for the "ab-initio" parameters that classify services and define their boundaries (personal cloud, financial parameters or healthcare positions, for example); data models (on SQL, NoSQL, vector or graph structures) essentially do not address the meaning of the data but identify the services' classes before the data-lake is created. Of course, unusable data and unstructured or denormalized raw datasources converge into the data-lake as well. The more aware the on-premise topology is, the more secure and localizable big data usage becomes, both on-premise and in the Cloud. Further, the agile MaaS approach reveals the business processes affected, the operating requirements and the stakeholders.

Fig. 1 – Corporate Data-Lake and Agile Big Data approach

Accordingly, agile Big Data practice sets the link between on-premise data topologies and data intelligence, whether on-premise or in the cloud. Topology leverages the company's service assets towards specific business objectives and will determine the successful user experience requirements and rapid alignment with respect to competitors.

This means that two crucial aspects have to be taken care of:

  • Data is the "compass" for understanding service capacity, stakeholders and the culture of the organization: big data agility is based on a data-driven approach. Therefore, minimize functional data behaviour in the initial project setup. Use MaaS topology to define data-driven project use cases. Data-driven project design defines the data ingestion architecture and data landing into the data-lake, and assists in understanding the best policy for continuous data feeding. Do not disregard this aspect: accurate data feeding is the core of Big Data approaches;
  • Move data analysis and functional aggregation to the data intelligence applied on the data-lake. During ingestion and data landing, data treatments have to be minimized. The Agile Big Data approach considers two zones: the in-memory one, based on data topology, on-premise and supported by MaaS; and the data intelligence one, based on functional analysis and programming working on sparse data (a minimal sketch of this split follows the list).
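To make that split concrete, here is a minimal, hypothetical Python sketch of data-driven landing: records are landed in the data-lake with only light distillation (source and time tagging), and all functional aggregation is deferred to the intelligence zone. The names (`land_record`, `data_lake`) are illustrative assumptions, not any specific platform's API.

```python
import json
import time
import uuid

# Illustrative in-memory stand-in for a data-lake landing zone.
data_lake = []

def land_record(raw: bytes, source: str) -> dict:
    """Land a raw record with minimal treatment: no functional
    aggregation here, only source/time tagging for later topology
    mapping. Records that fail parsing still land, flagged as
    unstructured, so the intelligence zone can decide later."""
    entry = {
        "id": str(uuid.uuid4()),
        "source": source,          # feeds the on-premise topology (MaaS)
        "landed_at": time.time(),
        "raw": raw,
    }
    try:
        entry["payload"] = json.loads(raw)  # distil only if trivially possible
        entry["structured"] = True
    except (ValueError, UnicodeDecodeError):
        entry["structured"] = False
    data_lake.append(entry)
    return entry

# Continuous feeding: every source lands through the same minimal path.
land_record(b'{"customer": 42, "basket": ["a", "b"]}', source="crm")
land_record(b"\x00\xffopaque-clickstream-chunk", source="web")
```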

Still, minimize any approach based on "ab-initio" technology and software development. The Big Data ecosystem provides excellent platforms, and the agile MaaS approach helps to defer the final technology choice/selection. Further, agile MaaS practice helps clarify success and failure zones and set expectations over time. This happens because, once services have been set by the on-premise topology, a link has been stretched between the data heritage and the data intelligence. There are no constraints between the raw data (documented or not) and the user experience that will leverage functional and business alignment. In the middle only the data-lake exists, continuously changing and growing, continuously supplying information to the data intelligence end.

The Many-Particle approach
Today, more than 70 percent of the world's information is unstructured, unclassified and, above all, misused: we are witnessing the greatest Marketing and Sales data myopia since those disciplines have existed. Still, there is little awareness of the Big Data benefits for service and/or product companies, or of how product companies can change the services built around their goods production: great amounts of data, exceptional growth, high entropy, unknown correlations and limited data usage. The concept of on-premise topology introduces services as data-driven aggregation states applied to given parts of the data-lake. But this is what happens in many-particle system instability (a yottabyte is 10^24 bytes, roughly 2^80 in binary usage). Their storage dimensions bring big data-lakes close to many-particle systems. This vision destroys any traditional approach to Marketing and Sales.
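As a quick check of those orders of magnitude:

```python
# SI vs. binary orders of magnitude for a yottabyte:
print(10**24)           # 1000000000000000000000000 (SI yottabyte)
print(2**80)            # 1208925819614629174706176 (binary equivalent)
print(2**80 / 10**24)   # ~1.21, i.e. the same order of magnitude
```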

If we consider the big data-lake, it contains fast-moving content ordered by data affinity and mass correlation. Depending upon dynamic data aggregation, data topologies may change by tuning the on-premise data mapping. Consider that data-lakes are mainly fed through:

– ingestion, distillation and landing from content-based sources (datasources, datasets, operational and transactional DBs);
– ingestion and distillation from collaborative feeding (dynamic collections of large amounts of information on users' behaviour coming from the internet, direct and/or indirect).

Collaborative ingestion can be managed as content-based as well, when the time needed to reach the data intelligence end has no strict constraints; this defines a third, hybrid method.
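A minimal sketch of how the three feeding methods might be told apart in code; the one-hour latency threshold separating "strict" from "non-strict" constraints is an invented assumption for illustration.

```python
from enum import Enum

class FeedingMode(Enum):
    CONTENT_BASED = "content-based"  # datasources, datasets, operational/transactional DBs
    COLLABORATIVE = "collaborative"  # behavioural data collected from the internet
    HYBRID = "hybrid"                # collaborative data managed as content-based

def choose_mode(source_is_behavioural: bool, max_latency_s: float) -> FeedingMode:
    """Collaborative feeds can be handled as content-based (hybrid)
    when the time-to-intelligence constraint is loose."""
    if not source_is_behavioural:
        return FeedingMode.CONTENT_BASED
    # Assumed threshold: above one hour we treat the constraint as non-strict.
    if max_latency_s > 3600:
        return FeedingMode.HYBRID
    return FeedingMode.COLLABORATIVE

print(choose_mode(True, 86400))   # FeedingMode.HYBRID
```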

This brief introduction tries to explain that the data-lake maps ab-initio topologies to services but may also classify the ecosystems in which the services are defined and applied. Services live in ecosystems, and ecosystems depend upon data aggregation (why used, where used, how used, who uses); just like aggregation states, big data density changes dynamically. These changes are a consequence of the datasources ingested, user experiences, customer behaviours, ecosystem interactions and, of course, business realignment. Marketing and Sales should change accordingly. But since a data-lake may grow by 40 percent per year (in line with the estimated worldwide rate of information growth, taking into account that unstructured data is growing 15 times faster than structured data – source IBM®), there is no way to get any (predictive) control for the marketing and sales organization, even where data warehousing and/or sophisticated traditional data mining and analysis are in place.
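To see why predictive control breaks down, a back-of-the-envelope projection of the growth figures just quoted helps; the initial 1 PB size and 70/30 mix are assumptions for illustration.

```python
# Assumed starting point: a 1 PB data-lake, 70% unstructured (as cited above).
s, u = 0.3, 0.7   # structured / unstructured, in PB

# If the whole lake grows 40% in a year and unstructured grows 15x faster
# than structured, the first-year structured rate g solves:
#   s*(1 + g) + u*(1 + 15*g) = (s + u) * 1.40
g = 0.40 * (s + u) / (s + 15 * u)
print(f"structured: {g:.1%}/yr, unstructured: {15 * g:.1%}/yr")  # ~3.7% / ~55.6%

# Holding those component rates constant, overall growth accelerates
# beyond 40% as the unstructured share rises:
for year in range(10):
    s *= 1 + g
    u *= 1 + 15 * g
print(f"after 10 years: {s + u:.1f} PB, {u / (s + u):.0%} unstructured")
```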

Anyway, data growth will be greater than ever in the coming years, and the variance of data aggregation in the data-lake will rise exponentially: this means many opportunities could be lost, and further marketing and sales entropy created. Ab-initio topology via the agile big data approach, with functional programming applied to the data-lake, supplies the best answer for prescriptive analysis on many-particle big data systems. In fact, the data-lake allows work on data cross-aggregation optimization, customer experience and aggregation states for service realignment with respect to the business ecosystems. Still, the data-lake is an extraordinary real-time "what-if set" for prescriptive scenarios, data processing assumptions and data risk propensity.

Fig. 2 – The Data-Lake is quickly becoming a Data-Sea with many-particle-like data behaviour and dimension

Banking and Goods Production are two typical examples of agile Big Data implementation. Both supply services. Both are trying to align offers instantly and proactively with business changes. Banking and Financial services play a strategic role in relationship management and profitability performance for corporate groups, client companies and commercial banking networks. This is why financial applications need to be rapidly synchronized to ecosystem fluctuation states, as ecosystem participants change their behaviour everywhere under local and international business conditions. The functional big data paradigm working on many-particle data aggregation is prescriptive with respect to unpredictable service transitions: it agilely realigns ecosystem service directions over the on-premise data topology mapping.

Goods production may tune services as a consequence of user experience by, for example, executing more focused and less time-consuming recommender systems. Goods production companies are in the race to provide personalized technical and commercial services, greater client loyalty and prescriptive offers, starting as soon as clients interact with or navigate the company website. With agile big data and the many-particle approach, goods production can potentially improve user-similarity detection through massive data-lake aggregations. Fast-moving data aggregations constantly feed functional data intelligence for service realignment, and topological correlations reposition on-premise data similarities.
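As an illustration of the kind of functional intelligence this feeds, here is a minimal cosine-similarity sketch over users' aggregated interaction profiles; the profiles and field names are invented for the example.

```python
import math

def cosine_similarity(a: dict, b: dict) -> float:
    """Similarity between two users' aggregated interaction profiles
    (e.g. product-category click counts distilled from the data-lake)."""
    shared = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in shared)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical aggregation states for two website visitors.
alice = {"sensors": 12, "valves": 3, "manuals": 7}
bob   = {"sensors": 9,  "valves": 1, "pumps": 4}
print(f"{cosine_similarity(alice, bob):.2f}")  # ~0.79: drives recommender realignment
```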

Two different paces, the same objective: be prescriptive, understand early which data aggregation state is the most appropriate along the data-lake's instability, and then contiguously realign product offers and service configuration, thereby keeping oversight of the ecosystems. An on-premise topology gauged on data-lake volume, velocity and variety allows Marketing and Sales to tune in to effective data aggregation and promptly adjust services to the ecosystem.

Conclusion
Client sentiment and user experience behaviour analytics allow rapid changes to product offerings or customer support, which in turn enhance customer fidelity and business improvement. However, data are growing exponentially, and business alignment has to be provided in ever more decentralized environments. The agile MaaS approach, based on data-driven raw volume, data velocity and variety together with on-premise services topology, is a relatively low-cost and light model. Topology does not influence data treatment. Data remains intact, while service integrity and classification drive business, user experience and ecosystem alignment. Accordingly, the agile practice and many-particle approach we introduced minimize data entropy together with data access time cycles everywhere, preserve data security and enhance the user experience towards functional visualization realignment.

Acknowledgment
I sincerely thank Paolo La Torre for his precious feedback on the contents and his encouragement to publish this paper. Paolo works as Commercial, Technical and Compliance Project Supervisor for Big Data planning and engagement directions in finance and banking.

References
N. Piscopo, M. Cesino – Gain a strategic control point to your competitive advantage – https://www.youtube.com/watch?v=wSPKQJjIUwI
N. Piscopo – ID Consent: applying the IDaaS Maturity Framework to design and deploy interactive BYOID (Bring-Your-Own-ID) with Use Case
N. Piscopo – A high-level IDaaS metric: if and when moving ID in the Cloud
N. Piscopo – IDaaS – Verifying the ID ecosystem operational posture
N. Piscopo – MaaS (Model as a Service) is the emerging solution to design, map, integrate and publish Open Data
N. Piscopo – Best Practices for Moving to the Cloud using Data Models in the DaaS Life Cycle
N. Piscopo – Applying MaaS to DaaS (Database as a Service ) Contracts. An introduction to the Practice
N. Piscopo – MaaS applied to Healthcare – Use Case Practice
N. Piscopo – ERwin® in the Cloud: How Data Modeling Supports Database as a Service (DaaS) Implementations
N. Piscopo – CA ERwin® Data Modeler’s Role in the Relational Cloud
N. Piscopo – Using CA ERwin® Data Modeler and Microsoft SQL Azure to Move Data to the Cloud within the DaaS Life Cycle

Disclaimer – This document is provided AS-IS for your informational purposes only. In no event will the author of "Agile Big Data and Many-Particle approach change Marketing and Sales effectiveness" be liable to any party for direct, indirect, special, incidental, economic (including lost business profits, business interruption, loss or damage of data, and the like) or consequential damages, without limitation, arising out of the use or inability to use this documentation, regardless of the form of action, whether in contract, tort (including negligence), breach of warranty, or otherwise, even if advised of the possibility of such damages. Specifically, any warranties are disclaimed, including, but not limited to, the express or implied warranties of merchantability, fitness for a particular purpose and non-infringement, regarding this document's use or performance. All trademarks, trade names, service marks, figures and logos referenced herein belong to their respective companies/offices.


New white paper: BYOD – Bring Your Own Doctor

The headline theme for our next seminar is 'BYOD – Bring Your Own Doctor'.

UPDATE: I am also presenting on this topic at the upcoming 13th annual eHealth Summit – Read more.

Mobile Big Data Cloud Computing

This will focus on the overall best practices for Mobile Big Data Cloud Computing, in particular focusing on Google and the Cloud suite they offer to achieve this.

We describe this scenario in more detail in this short executive briefing white paper:

Download the white paper (7 page PDF):

BYOD – Bring Your Own Doctor

Google Compute enters the IaaS market

Although there are a number of suppliers in the Cloud market, most accept and note the primary dominance of Amazon, and the distinct scale of the Cloud that they operate.

The Cloud providers market is like McDonalds – S, M, L and Super-size!

So while there are announcements all the time about hosting providers launching Clouds to take on the goliath, there are only a few big boys playing at the Super-size level: Amazon, Google, Salesforce, Microsoft.

So in contrast you definitely take note when that player is Google – announcing their own IaaS, Google Compute.

If you were to pick any one who could be considered a large enough data centre operator to trouble someone of Amazon’s size it would be Google – There’s no doubt they understand, and operate, very large-scale computing.

So leveraging that scale is as smart a move with them as it is with AWS, and of course more competition is always good for buyers.

Google Enterprise

This means one of the first questions for enterprise CIOs to consider is what the whole Google Cloud suite is – they now offer IaaS, PaaS and SaaS, and this comprehensive "full Cloud stack" approach is very interesting from an outsourcing point of view. This is what I describe as 'Google Enterprise' best practices.

Customers would be able to fully exploit what will be one of the world's most powerful Cloud environments, catering for different app and functional needs, e.g.

  • IaaS for legacy applications
  • PaaS for more ad-hoc marketing-driven online applications
  • SaaS for switching to Google Apps for the office suite

Cloud service brokerage

Therefore the fundamental question is how this different set of services can be unified into a single customer outsourcing scenario, and how you would then also bring in and integrate with the other Cloud services: Salesforce.com, Azure for .net, etc.

Most large corporations have this spread mix of IT and we’re likely to see it across Clouds too, as those suppliers also specialize.

This is a good thing, especially if Cloud Service Broker models help unite these into singular delivery methods that support business initiatives like Single Customer View and so forth.

Integrated “App to App Synergy” is key to Cloud ROI and enterprise social media strategy

The Forrester report 'Total Impact of Google Apps' provides a framework for identifying the ROI from switching from a legacy mail and collaboration platform to Google Apps.

Surveying 200+ organizations with over 1,000 employees, Forrester identified the main business benefits required to build an ROI framework for moving to a Cloud-based service, which essentially broke down into two main categories:

  • IT infrastructure cost savings
  • Staff productivity increases

The IT infrastructure savings provide the hard numbers for justifying the business case, enabling cost reduction in areas you would expect, like software licences, hardware and IT administrator costs, and deliver an ROI (a rough sanity check of these figures is sketched after the list):

  • Break even within 1.4 months
  • 329% risk-adjusted ROI
  • A Net Present Value of over $10m following an investment of $400k
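For readers who want to sanity-check headline figures like these, the standard ROI and NPV definitions are easy to encode. The sketch below is hypothetical: the yearly benefit stream is invented for illustration, and since the three reported figures come from different parts of Forrester's model, no single cash flow reproduces all of them at once.

```python
def roi(total_benefits, cost):
    """Return on investment: net gain as a fraction of cost."""
    return (total_benefits - cost) / cost

def npv(cashflows, rate):
    """Net present value of yearly cashflows (year 0 first) at a discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# A 329% risk-adjusted ROI on a $400k investment implies roughly
# $1.72M of risk-adjusted benefits over the analysis period:
print(f"{roi(1_716_000, 400_000):.0%}")   # 329%

# Invented stream: $400k upfront, then $4.2M/year of combined savings
# and productivity gains for three years, discounted at an assumed 10%,
# lands near the reported $10m NPV:
print(f"${npv([-400_000, 4_200_000, 4_200_000, 4_200_000], 0.10):,.0f}")
```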

For many organizations the catalyst for the switch is a degrading email system experiencing performance issues that have become intolerable to the mission critical status that email now has for the organization.

So making the move solves this burning need and simultaneously maximizes cost savings by avoiding future costs related to maintaining on-site IT infrastructure, such as winding down the need for equipment like VPNs.

App to App Synergy

This immediate payback means that the additional ROI benefits from staff productivity are a 'free bonus' on top; however, it's important to note just how beneficial these are and how the business can exploit them.

Indeed, Forrester then makes the critical point that even greater business impact is enjoyed through increases in staff productivity, highlighting features and benefits like more efficient document collaboration and more effective virtual meetings.

These are more intangible ROI benefits, not as easy to quantify in immediate dollar terms; however, by considering them within a broader context of process improvement, and identifying how these personal productivity gains are component parts of broader workflow enhancements, senior executives can start to relate to them in terms important to them.

For example:

  • Sales teams producing more client proposals faster
  • Quicker resolution of customer service issues
  • Improved technical documentation

A key mechanism for achieving this productivity improvement is what Google describes as 'App to App' synergy, referring to the SSO (Single Sign-On) environment that Google offers for its suite of collaboration tools. They highlight how staff enjoy time-saving productivity boosts by accessing email, VoIP and video conferencing all from within the web browser.

Social Workflow – 9x Process Improvements

This may seem like a relatively unimportant technical feature in the grand scheme of enterprise applications; however, when you consider that one of the primary issues IT faces is user adoption of new applications, you can see just how key it actually is.

Indeed, these principles are crucial for organizations also considering their enterprise social media strategy – how to internalize Web 2.0 tools, like building your own private LinkedIn-type site or encouraging better knowledge sharing through staff use of blogs and wikis.

In this article, the originator of the Enterprise 2.0 concept, Andrew McAfee, described how these new social media technologies face the challenge that use of email is so entrenched that the new tools would need to be 9x more useful for staff to switch, not just a little better.

Given just how much staff work within email on a day-to-day basis, it is probably not even possible that they would switch at all, and so Google addresses the situation by "bringing the mountain to Mohammed" – they embed these new collaboration methods directly into email.

There is no need to switch out to different social media and collaboration apps; they are built directly into the email interface itself.

This blending together is key to unlocking the transformational power of these technologies.

In an earlier white paper that builds on Andrew's Enterprise 2.0 work, 'Harnessing the Wikipedia Effect', I described how it enables 'Knowledge Process Management': blending together the previously separate applications for Knowledge Management and Business Process Management with the communications and collaboration apps.

The trend is also very effectively described in this article – Enterprise Apps Get Social.

What Google Apps is offering is this powerful effect distilled into an easily accessible online service, the essence of the business value of Cloud Computing.

Big Data PaaS – Reference architecture for Big Data Cloud Computing

Our next seminar is scheduled for April 24th in New York City, with a headline theme of Big Data Cloud Computing.

Specifically the agenda will be how to build a Big Data PaaS – Platform as a Service.

This combines the NIST Cloud model for the middleware stack with the corporate agenda to master the swathes of structured and unstructured content they wrestle with.

An initial reference architecture for this is described in the Canada Health Infoway cloud strategy document, read more here.

Our event will act as a forum to develop and build on this to define a detailed best practice model.

MaaS implements Small Data and enables Personal Clouds

Abstract – MaaS (Model as a Service) sets out a new concept for ordering and classifying data modeling design and deployment to the Cloud. MaaS changes the way data is moved to the Cloud because it allows data taxonomy, size and contents to be defined. Starting from data model design, MaaS can guide the DaaS (Database as a Service) lifecycle, providing data granularity and duty rules: as a consequence, MaaS implements the new concept of Small Data.

In fact, Small Data answers the need to control "on-premise" data dimension and granularity. Small Data is not, however, a limitation on data volume. Small Data affords full configuration of the data model and provides two main advantages, data model scale and data ownership, which enable assigned data deployment and, finally, data deletion in the Cloud.

Introduction

The inheritance of the past imposes managing big data as a consequence of multiple integrations and aggregations of data systems and of data movement. Data coming from Social Networks' data-intensive applications has contributed to blowing up the EB containers.

Administering big data is not an option but rather a debt contracted, above all, with the history of data management. Anyway, Big Data analytics seems to have become the standard practice for storing massive data. Is there any way to change this norm? Companies that have for many years used data models, designing and mapping data, started the change and today hold the "tiller" of their data heritage.

Accordingly, Small Data is far from being a further catch-phrase: the antonym of Big Data aims to change mindsets, using data models to design fit, manageable data systems, especially when data are moved to the Cloud. MaaS implements Small Data, enables Personal Clouds and helps to recover, order and classify the data inheritance of the past as well.

Why MaaS defines and implements Small Data
MaaS meets the ever-increasing need for data modeling and provides a solution that ensures continuity of data design and application. Further, it helps in choosing and defining architectures, properties and service assets, looking at the possible evolution and changes a data service can undergo. MaaS addresses scaling and dimensioning in designing data systems, supports scalability and, in particular, agile data modeling practice. Small Data starts when the dimension of the data system has to be defined and controlled from the description (metadata) onwards. In effect, we are improperly speaking of data systems: actually, we are dealing with data services. The Cloud is the great supplier of data services.

Since MaaS allows defining "on-premise" data design requirements, data topology, performance, placement and deployment, the models themselves are the services mapped in the Cloud. In fact, data models allow verifying "on-premise" how and where data has to be designed to meet the Cloud service's requisites. In practice, MaaS enables:

– Designing the data storage model. The model should enable query processing directly against databases to strengthen privacy levels and secure changes from database providers;

– Modeling data to calculate "a priori" physical resource allocation. How many resources does the service need? Do database partitions influence resource allocation and/or replication? Modeling the data means designing the service; calculating these magnitudes a priori drives both deployment and database growth;

– Modeling data to predict usage “early” and to optimize database handling. Performance and availability are two of the main goals promised by the Cloud. Usage is not directly dependent upon the infrastructure and, as a consequence, could be a constraint. Calculating the usage rate means understanding the data application life cycle and then optimizing the data service properties;

– Designing multi-data structures to control database elasticity and scalability. Models contain deployment properties and map the target database service. Therefore, the model designs database elasticity and scalability "on-premise". Still, we will see later that multi-database design is a way to control data persistence.

Thus, imagine we have users asking for temporary services to be deployed and, after the services have been closed, cancelled. Services satisfying these requirements are based upon data models designed with MaaS agile modeling techniques: controlled size and contents, fast updates, rapid testing and continuous improvement through to the generation of the model/dataset on the target database. Models should set the users' data contents, dimension (for example, to suit mobile services) and data deployment (geo-location, timing …), and allow the data to be, on demand, definitively destroyed.
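A minimal sketch of what such a Small Data model descriptor could look like; every field and method below is a hypothetical illustration of the MaaS properties discussed (size control, geo-located deployment, time limits, on-demand destruction), not an actual MaaS API.

```python
from dataclasses import dataclass, field
from typing import List, Optional
import time

@dataclass
class SmallDataModel:
    """Hypothetical 'on-premise' descriptor for a Small Data service."""
    owner: str
    max_size_mb: int                    # controlled dimension and granularity
    allowed_regions: List[str]          # geo-location terms for deployment
    ttl_seconds: Optional[int] = None   # optional time limit for Cloud storage
    deployed_region: Optional[str] = None
    created_at: float = field(default_factory=time.time)
    destroyed: bool = False

    def deploy(self, region: str) -> None:
        """Deployment must respect the location terms set in the model."""
        if region not in self.allowed_regions:
            raise ValueError(f"{region!r} violates the model's location terms")
        self.deployed_region = region

    def expired(self) -> bool:
        """True when an over-storage timing strategy marks the data stale."""
        return (self.ttl_seconds is not None
                and time.time() - self.created_at > self.ttl_seconds)

    def destroy(self) -> None:
        """User-triggered, definitive deletion once the service is over."""
        self.deployed_region = None
        self.destroyed = True

# A temporary, 50 MB, EU-only service that the owner can shred on demand.
model = SmallDataModel(owner="alice", max_size_mb=50,
                       allowed_regions=["eu-west"], ttl_seconds=30 * 86400)
model.deploy("eu-west")
model.destroy()
```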

With MaaS, Small Data can be defined from the metamodel design onwards and then implemented, synchronized and deployed to the target by applying the DaaS lifecycle. Of course, although Small Data are placed under size and content control, they can be created and replicated infinitely: is this a further way to switch back to Big Data?

The following aspects should be considered:

1) Users should be enabled to destroy their Small Data in the Cloud when the service is over. This is a great feature for data security. Still, data navigation changes from infinite chains to point-to-point gaps (or better, "Small to Small" Data) based upon the Small Data model definitions;

2) A time limit could be set, or an over-storage timing strategy might be defined, for Small Data allocation in the Cloud;

3) Statistically speaking, by applying Small Data the average volume of data stored by intensive data applications (Social Networks, for example) could be computed, and standard deviations estimated, given the multiple storage allocations and, above all, the free storage involved.

This doesn't mean the YB is the last order of magnitude of data size the SI will define. Definitively, MaaS enables service designers to plan, create and synchronize Small Data models from and to any web datasource and data management system in the Cloud. Data granularity changes, enabling designers to calibrate data models and data services by defining the model's content, which allows users to deploy (i.e., allocate) and then, at the end of the service, shred the data in the Cloud. This is a new frontier for Mobile services, Open Data and what today might be defined as the Personal Cloud.

MaaS enables Personal Clouds
What is really new in the Small Data definition is the possibility of moving data ownership from the big players to users and, as a consequence, placing ownership and deployment location under an explicit relation. Today data storage is almost fully under provider control. Providers and storage players manage data users and data security. By applying MaaS, the Personal Cloud is enabled, and here is what changes:

1) Integrity defined into MaaS at the Small Data level is maintained through the service. Ownership matches the data structure/dataset deployed;

2) MaaS identifies trust boundaries throughout the IT architecture. Data models are the right way to define trust boundaries and ownership to prevent unauthorized access and sharing in “Small to Small” Cloud storage and navigation;

3) MaaS enables location to be set in the data model. Any mismatch could be an infringement and must be reconciled with the terms registered in the Small Data design. Point-to-point navigation based upon Small Data simplifies Personal Data management and maintenance: this has a positive impact on data storage order and security;

4) Ownership and data location are linked by relation. Once Personal Data in the Cloud has to be deleted, the Cloud Provider should ensure the data is unrecoverable. Looking at the data model mapping, data has to be destroyed in the location defined in the Small Data design. Data owners know where the data has been deployed because, before opening the data service in the Cloud, they may accept or reject the location assigned and might ask for a new storage site (a reconciliation sketch follows this list).
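A hypothetical reconciliation check for points 3 and 4 above: the deployment reported by the provider is compared with the terms registered in the Small Data design (the dictionary structure is illustrative).

```python
def reconcile_location(model_terms: dict, provider_report: dict) -> list:
    """Flag infringements between the Small Data design and the
    provider's reported deployment (illustrative structure)."""
    issues = []
    if provider_report["owner"] != model_terms["owner"]:
        issues.append("ownership mismatch")
    if provider_report["region"] not in model_terms["allowed_regions"]:
        issues.append(f"data deployed outside agreed regions: "
                      f"{provider_report['region']}")
    return issues

terms = {"owner": "alice", "allowed_regions": ["eu-west", "eu-central"]}
report = {"owner": "alice", "region": "us-east"}
print(reconcile_location(terms, report))
# ['data deployed outside agreed regions: us-east']
```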
The Personal Cloud has a large range of applications:

1) Mobile services in single/multiple personal storage by single/multiple dataset in the Cloud;

2) Healthcare services as introduced, for example, in [10] and in [11];

3) Open Data services, especially when they are interfaced to the above 1) and 2) services;

4) HR services, mainly when they concern curricula content. Owners should be free to definitively cancel personal data, as defined in the Small Data design;

5) Generic personal data in the Cloud, regardless of whether they are permanently or temporarily stored.

Applying MaaS, Small Data can be considered on-premise services because they collect behaviours and information concerning structures (to be deployed in the Cloud), access rights, security, scaling, partitioning and evolution. In other words, Small Data ensures that the behaviour and effectiveness of the released Cloud applications can be measured and tracked to meet users' needs. Models leverage Cloud data services to enable flexible deployment and therefore enforce Personal Data persistence, storage and geo-location policies throughout the Cloud.

Conclusion
Big Data is a consequence; Small Data is a new start. MaaS provides best practices and guidelines to implement Small Data and Personal Cloud ownership, starting from data modeling and the DaaS lifecycle. We want to underline that Big Data as we know it is a consequence of how companies have used, stored and maintained data for many years. Small Data might indeed be a new way to manage data in the Cloud. Especially when personal data are considered, the Personal Cloud provides, on one hand, a preconfigured and operational data definition (for example, local information vs. cloud information) and, on the other, the details of how to enable provisioning and deployment of multiple storage in the Cloud. Finally, starting from "on-premise" Small Data design, the Personal Cloud can be applied, and users can quickly gain an understanding of Cloud deployment, data centre geo-locations and service constraints.

Glossary
Big Data – Collections of datasets that cannot be processed (analysis, storage, capture, search, sharing, visualization …) using on-hand database management tools or traditional data processing applications, due to data complexity, data volume and fast growth;
EB – Exabyte, a unit of information or computer storage equal to one quintillion bytes (10^18 bytes);
DaaS – Database as a Service;
MaaS – Model as a Service (a trademark);
SI – Système International d'unités, the international metric prefix system;
YB – Yottabyte, a unit of information or computer storage equal to one septillion bytes (10^24 bytes).

References
[1] N. Piscopo – ERwin® in the Cloud: How Data Modeling Supports Database as a Service (DaaS) Implementations
[2] N. Piscopo – CA ERwin® Data Modeler’s Role in the Relational Cloud
[3] D. Burbank, S. Hoberman – Data Modeling Made Simple with CA ERwin® Data Modeler r8
[4] N. Piscopo – Best Practices for Moving to the Cloud using Data Models in the DaaS Life Cycle
[5] N. Piscopo – Using CA ERwin® Data Modeler and Microsoft SQL Azure to Move Data to the Cloud within the DaaS Life Cycle
[6] N. Piscopo – MaaS (Model as a Service) is the emerging solution to design, map, integrate and publish Open Data https://cloudbestpractices.wordpress.com/2012/10/21/maas/
[7] N. Piscopo – MaaS Workshop, Awareness, Courses Syllabus
[8] N. Piscopo – DaaS Workshop, Awareness, Courses Syllabus
[9] N. Piscopo – Applying MaaS to DaaS (Database as a Service) Contracts. An introduction to the Practice https://cloudbestpractices.wordpress.com/2012/11/04/applying-maas-to-daas/
[10] N. Piscopo – MaaS applied to Healthcare – Use Case Practice, https://cloudbestpractices.wordpress.com/2012/12/10/maas-applied-to-healthcare/
[11] N. Piscopo – MaaS and UMA implementation at page 16 in Transform2:, https://cloudbestpractices.files.wordpress.com/2013/01/transform-203.pdf
[12] Agile Modeling – http://www.agilemodeling.com/

Disclaimer
"MaaS implements Small Data and enables Personal Clouds" (the Document) is provided AS-IS for your informational purposes only. In no event will the author of the Document be liable to any party for direct, indirect, special, incidental, economic (including lost business profits, business interruption, loss or damage of data, and the like) or consequential damages, without limitation, arising out of the use or inability to use this document or the products, regardless of the form of action, whether in contract, tort (including negligence), breach of warranty, or otherwise, even if advised of the possibility of such damages. Specifically, any warranties are disclaimed, including, but not limited to, the express or implied warranties of merchantability, fitness for a particular purpose and non-infringement, regarding this document or the products' use or performance. All trademarks, trade names, service marks and logos referenced herein belong to their respective companies/offices.

Webinar: Going Google Enterprise

Join my Sheepdog colleagues for a webinar on Thursday, March 21st at 12pm EST – Going Google Enterprise.

Register here

With a customer case study from Joe AbiDaoud, CIO of Hudbay Minerals, this webinar session will provide a comprehensive introduction to the Google Apps suite and how your enterprise can leverage its transformational capability.

Bringing Google+ To Work

It's often said that BYOD – Bring Your Own Device – is part of a broader trend of "Consumerization".

In short, staff are increasingly bringing their own home technologies into work, such as their smartphones, and using them for work purposes.

It's a blended experience where they continue to use the same device for personal purposes too, which highlights key issue areas like Identity: is that the "Work You" or the "Personal You" signed into those apps?

Another interesting aspect of the trend is highlighted in this Google article,  Bringing Google+ To Work.

You can think of Google+ as the ability to implement your own "private Twitter", where you can tweet messages easily but with more granular control over who receives them.

This is an example of consumerization, where corporate 'shared learning' takes place via what are typically considered Facebook-like consumer tools.

Google+ and Google Apps together: Even more powerful

Given that the usual risk for corporate folks is, of course, information privacy, the fact that this is achieved through Google's unique 'Circles' method of control makes it a very fertile area for new IM policies.

This will be especially well demonstrated when Google themselves better intertwine the same method throughout all of their own products.

In this article – Google+ and Google Apps together, the author explores what this better integration might entail.

Always On Canadian Cloud Computing

Our next Canada Cloud Seminar is now scheduled.

On March 13th, at the downtown Toronto Business Development Centre, we will introduce ‘Always On Cloud Computing’.

Keynote speakers will include Craig McLellan, previously the CTO of Hosting.com and author of the best-practice program the 'Always On Framework'.

Read more and register here.

Getting Real with Ruby: Understanding the Benefits

By Jennifer Marsh

Jennifer Marsh is a software developer, programmer and technology writer and occasionally blogs for Rackspace Hosting.

Ruby is an advanced language for many programmers, and a powerful one for building dynamic interfaces on the web. Dynamic web hosting shouldn't be taken lightly, because security holes still exist. A good cloud web host will offer a safe environment for development while still offering the scalability and usability needed for Ruby programming, testing and deployment.

Space for Testing and Development

Web applications can grow to several gigabytes. For newer Ruby developers it's helpful to have enough storage space for backups, so that a backup can be made to back out deployed code changes. Ruby is an interpreted language, but a bug can still mean a lot of time and resources devoted to discovery and fixing. Instead of emergency code reviews, the developer can restore the old version of the application before troubleshooting bugs.

Support for Database or Hard Drive Restoration

In severe cases, the application corrupts the data stored in the database. A good web host will back up the database and restore it when the site owner needs it. This is especially useful in emergencies, when the site gets hacked or data is corrupted due to application changes or hard drive crashes. The web host should support the client, including in cases of restoring database and application backups.

Find Support for Ruby

To run Ruby, the web host must support the framework. Check with the hosting company and verify that the host allows execution of CGI files. A good approach is to find a host that offers FastCGI and specifies that it supports Ruby and Ruby on Rails. Ruby is typically supported by Linux hosts, but some Windows hosts will support it too. Like other interpreted languages, Ruby can run on any operating system with an interpreter.

Ask for Shell Access

Ruby can be a bit hairy to configure. If the programmer is familiar with the language, having shell access helps speed up application configuration. Not all hosts offer shell access, but with extended or advanced service most hosts will oblige the webmaster. Shell access gives the webmaster more control over the Ruby settings.

The most important parts of a web host are customer support and up-time. Most web hosts have a contract with the client that promises a percentage of up-time. This should be around 99%, meaning the website will be up for visitors. Check with the host for contract specifics before purchasing cloud hosting for Ruby.