Agile Big Data and the Many-Particle approach change Marketing and Sales effectiveness

Big data projects have a broad impact on organizations. A Big Data implementation goes beyond what could normally be considered a new way to align data management with the business. With Big Data, the path from data sources to data intelligence changes drastically. The way data intelligence is designed and implemented definitively changes how data is accessed, ingested, distilled, processed and visualized. When big data projects meet agile implementation, the data intelligence lifecycle shortens, increasing service capability and adequacy to fast-growing datasets and fast-moving business. Accordingly, agile practice and the many-particle approach minimize data entropy together with data access time cycles everywhere, preserve data security and enhance the user experience for instant business realignment.

Contents
Introduction
Data Topology and Agile Big Data
The Many-Particle approach
Conclusion
Acknowledgment
References

Introduction
Moving from today's business data to Big Data intelligence can be a costly and time-consuming process, one that could erode the tremendous advantage of the Big Data and Cloud paradigms. Today, information is still misaligned with the business despite the huge efforts of past business intelligence projects: companies still use only partial quantities of their real corporate data heritage. As a consequence, the data spectrum exploited is unpredictable, and aligning data and business is a long-term process. Agile Big Data aligns the data heritage and the business data instantly. Continuous data ingestion and distillation drastically reduce the ETL process, so that intelligence runs on the “big data-lake” when needed. On-premise big data topology and functional data intelligence then play a crucial role in meeting profitability, customer affinity and fast-moving business goals. This paper introduces the business case for Big Data to avoid Marketing and Sales data entropy, reduce risk and increase the likelihood of an aware and successful Big Data implementation.

Data Topology and Agile Big Data
Documenting how data evolves and is updated has always been considered good practice in data management. In the early days of the cloud paradigm, driven by the attraction of cost cutting, keeping a map of the company data heritage became a great benefit, especially when services had to be subscribed in the cloud. Data models, a way to document the data heritage, evolved into MaaS (Model as a Service), which supports the agile design and delivery of data services in the Cloud and makes the difference when planning a Big Data implementation project.

Considering data models does not mean structured data only. On-premise models map data coming from structured, semi-structured and unstructured sources. Data model maps define the services topology to be deployed on-premise or in the cloud. Data models are still needed for early exploration analysis and for the “ab-initio” parameters that classify services and define their boundaries (personal cloud, financial parameters or healthcare positions, for example); data models (over SQL, NoSQL, vector or graph structures) essentially do not address the meaning of the data but identify the services' classes before the data-lake is created. Of course, unusable data and unstructured or denormalized raw datasources converge into the data-lake as well. The more aware the on-premise topology, the more secure and localizable the big data usage, both on-premise and in the Cloud. Further, the agile MaaS approach reveals the business processes affected, the operating requirements and the stakeholders.
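To make the idea concrete, here is a minimal, hypothetical sketch of such an “ab-initio” classification step: datasources are tagged with a service class that sets topology and security boundaries before anything lands in the data-lake. The attribute names and class labels are assumptions for illustration, not a prescribed schema.

```python
# Hypothetical sketch: classify datasources into service classes before
# the data-lake is created. Names and boundaries are illustrative only.
from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    structure: str      # "structured", "semi-structured", "unstructured"
    contains_pii: bool  # drives the security/localization boundary
    domain: str         # e.g. "finance", "healthcare", "clickstream"

def service_class(src: DataSource) -> str:
    """Assign an ab-initio service class used to set topology boundaries."""
    if src.contains_pii:
        return "personal-cloud"           # must stay on-premise or in a private zone
    if src.domain in ("finance", "healthcare"):
        return f"regulated-{src.domain}"  # localizable, audit-ready services
    return "general-analytics"            # free to move to the public cloud

sources = [
    DataSource("crm_customers", "structured", True, "sales"),
    DataSource("web_clickstream", "semi-structured", False, "clickstream"),
]
topology = {s.name: service_class(s) for s in sources}
print(topology)
```

The output of a step like this is the on-premise topology map that later anchors security, localization and cloud-placement decisions.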

Fig. 1 – Corporate Data-Lake and Agile Big Data approach

Accordingly, agile Big Data practice sets the link between on-premise data topologies and data intelligence running on-premise or in the cloud. Topology leverages the company's services asset towards specific business objectives and will determine the user experience requirements needed for success and the proper rapid realignment with respect to competitors.

This means that two crucial aspects have to be taken care of:

  • Data is the “compass” for understanding service capacity, stakeholders and the culture of the organization: big data agility is based on a data-driven approach. Therefore, minimize functional data behaviour in the initial project setup and use the MaaS topology to define data-driven project use cases. Data-driven project design defines the data ingestion architecture and the data landing into the data-lake, and assists in understanding the best policy for continuous data feeding. Do not disregard this aspect: accurate data feeding is the core of Big Data approaches;
  • Move data analysis and functional aggregation to the data intelligence applied on the data-lake; data treatments during ingestion and landing have to be minimized (a minimal ingestion sketch follows this list). The Agile Big Data approach considers two zones: the in-memory zone, based on data topology and supported on-premise by MaaS, and the data intelligence zone, based on functional analysis and programming working on sparse data.
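As a hedged illustration of the first point, the sketch below lands raw batches into a data-lake landing zone with only technical metadata added, leaving all functional aggregation to the downstream intelligence layer. Paths, field names and the JSON layout are assumptions.

```python
# Minimal, hypothetical sketch of data-driven ingestion: land records in the
# data-lake essentially as-is, deferring functional aggregation to the
# intelligence layer. Paths and field names are assumptions.
import json, time, pathlib

LAKE_LANDING = pathlib.Path("datalake/landing")

def ingest(batch, source_name):
    """Persist a raw batch with only technical metadata added (no business logic)."""
    LAKE_LANDING.mkdir(parents=True, exist_ok=True)
    envelope = {
        "source": source_name,
        "ingested_at": time.time(),   # technical metadata only
        "records": batch,             # payload left untouched: distil later
    }
    out = LAKE_LANDING / f"{source_name}_{int(time.time())}.json"
    out.write_text(json.dumps(envelope))
    return out

# continuous feeding: each call appends a new immutable file to the lake
ingest([{"customer": "A-100", "event": "view"}], "web_clickstream")
```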

Still, minimize any approach based on “ab-initio” technology and software development. The Big Data ecosystem provides excellent platforms, and the agile MaaS approach helps to defer the final technology choice. Further, agile MaaS practice helps to clarify the zones of success and failure and to set expectations over time. This happens because, once services have been set by the on-premise topology, a link has been stretched between the data heritage and the data intelligence. There are no constraints between the raw data (documented or not) and the user experience that will leverage functional and business alignment. In the middle, only the data-lake exists, continuously changing and growing, continuously supplying information to the data intelligence ending.

The Many-Particle approach
Today, more than 70 percent of the world's information is unstructured, unclassified and, above all, misused: we are witnessing the greatest Marketing and Sales data myopia since those disciplines have existed. Still, there is little awareness of the Big Data benefits for service and/or product companies, or of how product companies can change the services built around their goods production: great amounts of data, growing exceptionally fast, with high entropy, unknown correlations and limited data usage. The concept of on-premise topology introduces services as data-driven aggregation states applied to given parts of the data-lake. This is what happens with the instability of a many-particle system (a yottabyte is 10^24 bytes, or 2^80 bytes in binary notation). The sheer dimension of big data storage brings the data-lake close to a many-particle system. This vision destroys any traditional approach to Marketing and Sales.

If we consider the big data-lake, it contains fast-moving content ordered by data affinity and mass correlation. Depending upon dynamic data aggregation, data topologies may change by tuning the on-premise data mapping. Consider that data-lakes are mainly fed through:

– ingestion, distillation and landing from content-based sources (datasources, datasets, operational and transactional DBs);
– ingestion and distillation from collaborative feeding (dynamic collections of large amounts of information on users' behaviour coming from the internet, direct and/or indirect).

Collaborative ingestion can also be managed as content-based when the time needed to reach the data intelligence ending has no strict constraints, which defines a third, hybrid method.
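A minimal sketch of how these three feeding modes might be routed is shown below; the origin labels and latency threshold are illustrative assumptions, not a reference architecture.

```python
# Hypothetical routing of the three feeding modes described above; the
# thresholds and labels are assumptions, not a reference design.
def route_feed(feed):
    """Choose an ingestion path based on feed origin and latency constraint."""
    if feed["origin"] == "operational":            # datasources, datasets, DBs
        return "content-based"
    if feed["origin"] == "behavioural":            # users' behaviour from the internet
        # if the intelligence ending tolerates delay, treat it like content-based
        return "hybrid" if feed["max_latency_s"] > 3600 else "collaborative"
    return "content-based"

print(route_feed({"origin": "behavioural", "max_latency_s": 60}))      # collaborative
print(route_feed({"origin": "behavioural", "max_latency_s": 86400}))   # hybrid
```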

This brief introduction tries to explain that the data-lake maps ab-initio topologies to services but may also classify the ecosystems the services are defined in and applied to. Services live in ecosystems, and ecosystems depend upon data aggregation (why, where and how the data is used, and by whom); just like aggregation states, big data density changes dynamically. These changes are a consequence of the datasources ingested, user experiences, customer behaviours, ecosystem interactions and, of course, business realignment. Marketing and Sales should change accordingly. But since a data-lake may grow by 40 percent per year (in line with the estimated worldwide rate of information growth, taking into account that unstructured data is growing 15 times faster than structured data – source IBM®), a marketing and sales organization has no way to gain any (predictive) control, even when data warehousing and/or sophisticated traditional data mining and analysis are in place.
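To put the growth figure in perspective, here is a back-of-the-envelope projection of 40 percent compound yearly growth; the starting size is an arbitrary assumption.

```python
# Quick projection of ~40% yearly data-lake growth mentioned above;
# the starting size is an assumed figure for illustration only.
size_tb = 500             # assumed current data-lake size in terabytes
for year in range(1, 6):
    size_tb *= 1.40       # 40% compound growth per year
    print(f"year {year}: ~{size_tb:,.0f} TB")
# after 5 years the lake is roughly 5.4x its original size
```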

In any case, data growth will be greater than ever in the coming years, so the variance of data aggregation in the data-lake will rise exponentially: many opportunities could be lost, adding further marketing and sales entropy. Ab-initio topology through the agile big data approach, together with functional programming applied to the data-lake, supplies the best answer for prescriptive analysis on many-particle big data systems. In fact, the data-lake allows work on data cross-aggregation optimization, customer experience and aggregation states for realigning services with respect to the business ecosystems. Moreover, the data-lake is an extraordinary real-time “what-if set” for prescriptive scenarios, data processing assumptions and data risk propensity.


Fig. 2 – The Data-Lake is quickly becoming a Data-Sea with many-particle-like data behaviour and dimension

Banking and Goods Production are two typical examples of agile Big Data implementation. Both supply services. Both try to align offers instantly and proactively with business changes. Banking and Financial services play a strategic role in relationship management and profitability performance for corporate groups, client companies and commercial banking networks. This is why financial applications need to be rapidly synchronized with ecosystem fluctuation states, as ecosystem participants everywhere change their behaviour due to local and international business conditions. A functional big data paradigm working on many-particle data aggregation is prescriptive with respect to unpredictable service transitions: it agilely realigns ecosystem service directions over the on-premise data topology mapping.

Goods production may tune services as a consequence of user experience by, for example, running more focused and less time-consuming recommender systems. Goods production companies are in the race to provide personalized technical and commercial services, greater client loyalty and prescriptive offers from the moment clients interact with or navigate the company website. With the agile big data and many-particle approach, goods production potentially improves user-similarity estimates through massive data aggregations in the data-lake. Fast-moving data aggregations constantly feed the functional data intelligence for service realignment and for repositioning topological correlations over on-premise data similarities.
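As a hedged sketch of the user-similarity idea behind such recommender systems, the snippet below ranks clients by cosine similarity over per-user aggregations that would be drawn from the data-lake; the interaction matrix is a toy assumption.

```python
# Toy illustration of user similarity over data-lake aggregations.
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# rows: users, columns: aggregated interactions with product families
interactions = {
    "client_A": [5, 0, 2, 1],
    "client_B": [4, 0, 3, 0],
    "client_C": [0, 4, 0, 5],
}
target = interactions["client_A"]
ranked = sorted(
    ((cosine(target, v), name) for name, v in interactions.items() if name != "client_A"),
    reverse=True,
)
print(ranked)  # client_B is the most similar, so its offers are good candidates
```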

Two different paces, the same objective: be prescriptive, understanding as early as possible which data aggregation state is the most appropriate amid the data-lake's instability, and then continuously realign the product offer and service configuration and, consequently, keep oversight of the ecosystems. An on-premise topology gauged on data-lake volume, velocity and variety allows Marketing and Sales to tune in to the effective data aggregation and promptly adjust services to the ecosystem.

Conclusion
Client sentiment and user experience behaviour analytics allow rapid changes to product offerings or customer support, which in turn enhance customer fidelity and business improvement. However, data are growing exponentially and business alignment has to be provided in increasingly decentralized environments. The agile MaaS approach, based on data-driven raw volume, data velocity and variety together with an on-premise services topology, is a relatively low-cost and lightweight model. Topology does not influence data treatment. Data remains intact, while service integrity and classification drive business, user experience and ecosystem alignment. Accordingly, the agile practice and many-particle approach we introduced minimize data entropy together with data access time cycles everywhere, preserve data security and enhance the user experience for functional visualization realignment.

Acknowledgment
I sincerely thank Paolo La Torre for his precious feedback on the contents and for his encouragement to publish this paper. Paolo works as a Commercial, Technical and Compliance Project Supervisor for Big Data planning and engagement directions in finance and banking.

References
N. Piscopo, M. Cesino – Gain a strategic control point to your competitive advantage – https://www.youtube.com/watch?v=wSPKQJjIUwI
N. Piscopo – ID Consent: applying the IDaaS Maturity Framework to design and deploy interactive BYOID (Bring-Your-Own-ID) with Use Case
N. Piscopo – A high-level IDaaS metric: if and when moving ID in the Cloud
N. Piscopo – IDaaS – Verifying the ID ecosystem operational posture
N. Piscopo – MaaS (Model as a Service) is the emerging solution to design, map, integrate and publish Open Data
N. Piscopo – Best Practices for Moving to the Cloud using Data Models in the DaaS Life Cycle
N. Piscopo – Applying MaaS to DaaS (Database as a Service) Contracts. An introduction to the Practice
N. Piscopo – MaaS applied to Healthcare – Use Case Practice
N. Piscopo – ERwin® in the Cloud: How Data Modeling Supports Database as a Service (DaaS) Implementations
N. Piscopo – CA ERwin® Data Modeler’s Role in the Relational Cloud
N. Piscopo – Using CA ERwin® Data Modeler and Microsoft SQL Azure to Move Data to the Cloud within the DaaS Life Cycle

Disclaimer – This document is provided AS-IS for your informational purposes only. In no event will the contents of “Agile Big Data and the Many-Particle approach change Marketing and Sales effectiveness” be liable to any party for direct, indirect, special, incidental, economic (including lost business profits, business interruption, loss or damage of data, and the like) or consequential damages, without limitation, arising out of the use or inability to use this documentation, regardless of the form of action, whether in contract, tort (including negligence), breach of warranty, or otherwise, even if advised of the possibility of such damages. Specifically, any warranties are disclaimed, including, but not limited to, the express or implied warranties of merchantability, fitness for a particular purpose and non-infringement, regarding the use or performance of this document. All trademarks, trade names, service marks, figures and logos referenced herein belong to their respective companies/offices.


ZScaler – Enabling business beyond the corporate network

An example of a Cloud service available in Canada is ZScaler.

Based in San Jose, the company operates a global network of data centres from which it provides a managed service for implementing Cloud Security best practices.

For an in-depth introduction to this capability, recommended reads are their white paper ZScaler – Why Cloud-based Security: Zscaler why_cloud_delivered_security (18-page PDF), and the corporate presentation: Zscaler Corp Overview 10.02.12 (1) (20-page PDF).

An example of the type of scenario where this could be applied is described in our Solution Accelerator called ‘Cloud VDI for Healthcare‘. This looks at how the growing trend of BYOD (Bring Your Own Device) will manifest in Healthcare.

ZScaler can be used for key requirements like these, offering Cloud-based security monitoring of the content and applications being delivered to the device.

Local contact is through Sales Manager Doug Smith – Reach out on Linkedin here.

Reducing Risk with Encryption for Multi-Tenant Environments

Figure: Amazon Virtual Private Cloud diagram

One of the biggest hurdles to cloud adoption is undeniably security. In particular, public cloud services are often under scrutiny as to whether a multi-tenant environment is actually secure. Let’s face it: production virtualized environments are a relatively recent trend, which means their security practices have had little time to mature.

As more business-critical resources become virtualized, there is an increasing need to ensure the right security controls are in place. Until recently, multi-tenant encryption solutions weren’t particularly effective. Key management was one of the main reasons for that, as the portability of VMs across multiple physical servers imposed advanced encryption key requirements.

AFORE Solutions Inc., a Cloud Security and Solution Provider, recently announced the release of their CloudLink™ 2.0 with Secure Virtual Storage Appliance, the first solution that enables cloud-based DR solutions to meet key regulatory and compliance requirements. This appliance provides a storage repository that can be accessed by VMs hosted in the cloud. Most encryption is currently applied through storage gateway methods, which means data is only encrypted as it is sent to the cloud. CloudLink™ Secure VSA encrypts and protects data at all times, which is particularly important in highly regulated industries. The keys are managed by the enterprise, and encryption keys can be controlled through Active Directory integration.
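The principle at work (sketched below in a form that is not AFORE's implementation, just an illustration) is that data is encrypted with a key the enterprise controls before it ever reaches the provider's storage, so a multi-tenant environment only ever holds ciphertext. The example uses the third-party Python cryptography package purely to show the idea.

```python
# Illustrative sketch only: enterprise-held-key encryption before data
# reaches cloud storage. Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

enterprise_key = Fernet.generate_key()   # kept in the enterprise key store, never in the cloud
cipher = Fernet(enterprise_key)

record = b"patient-id=123; diagnosis=..."         # sensitive, regulated data
stored_in_cloud = cipher.encrypt(record)          # only ciphertext leaves the premises

# the provider (or another tenant) sees ciphertext; only the key holder can read it
assert cipher.decrypt(stored_in_cloud) == record
```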

CloudLink™ Secure VSA has already proven itself in Amazon VPC™ (Virtual Private Cloud), VMware vCloud™ Director environments and CA AppLogic based clouds. The main reason for the success is that organizations want to take advantage of the many benefits of the cloud model. If a provider can offer compliant environments, there is an immediate advantage.

Disk encryption is one of the key security controls used in enterprises to reduce the threat of data loss. The same methodology applies to cloud environments, where you need to reduce the risk of unauthorized access as much as possible. Having the ability to encrypt individual VMs adds an additional (and significant) layer of security to help protect your business-critical resources.

Microsoft Digital Government

With the upcoming OASIS conference being held at the Microsoft HQ in Washington DC, and my first session focused on “Digital Government”, it’s helpful to showcase what Microsoft’s offering is.

Cloud Connected Government

Digital Government is the headline program recently announced by the new White House CIO, Steven VanRoekel.

It describes a number of technical domains that can be harnessed to enable the public sector to drive more innovation, and these domains are nicely explained by looking at what Microsoft offers in these areas.

These are illustrated through their white paper developed with one of their strategy partners, CSTransform – The Connected Government Framework (83-page PDF), which highlights key points such as:

  • Connected Government Reference Architecture – The backbone of this program is a technical architecture for integrating legacy on-premise applications and new Cloud applications, where an Enterprise Service Bus can better unite data from across these different environments.
  • Cloud Identity – Via a system of ‘Relying Parties’ and open standards for Identity like OpenID, citizens can leverage existing log-on systems like Windows Live to streamline how quickly and easily they can access these new apps.
  • Crowdsourcing and Social Media – The Azure-based Town Hall app enables public sector agencies to utilize Crowdsourcing models for better citizen engagement, and they also make use of public social media like Facebook, that is blended into the web site portal.
  • Integrated Cloud BPM – These social interactions trigger various business processes, and agencies can use a mix of Cloud apps, those on Azure as well as Office 365, to empower staff with the full suite of collaboration tools needed to fulfill these workflows.

Cloud Best Practices

The critical aspect is the ability to leverage the Cloud to share best practices.

In the paper they showcase examples like the ‘Virtual Rucksack’, an app that utilizes all of the common components described above to provide a new, very helpful service for homeless people in Birmingham, UK.

It’s highly likely that there are homeless people in Birmingham, USA as well, amongst many other locations throughout the world, and for each location to ‘reinvent the wheel’ and develop its own app for this need is the kind of highly inefficient approach inherent to traditional IT.

In contrast the Cloud will provide a framework for global collaboration, where public sector agencies can learn about what programs are successful in these areas, and simply ‘Download and Run’ the apps that make this possible.

Private Cloud Agility : ServiceMesh

At the recent WPC12 in Toronto, Microsoft laid out more of their battle plans for the Enterprise Cloud Computing market.

Naturally they are aggressively positioning Hyper-V as the platform of choice for the large corporate looking to roll out Cloud internally.

At this Private Cloud web site they provide white papers and other resources that make some challenging comparisons with VMware.

ServiceMesh

This trend highlights the context for one of our keynote Technology Strategy Board members: ServiceMesh.

ServiceMesh founder and CEO Eric Pulier is one of our global leadership team, helping us define the role of their technologies and how it enables Enterprise Cloud Computing.

Fundamentally this can be defined as ‘Private Cloud Agility’, one of our headline themes for this best practice.

Private Clouds are achieved through products like Hyper-V or VMware, and improving agility is one of their primary benefits, achieved through self-service automation and other key features.

However, these base products alone aren’t enough to fully maximize those benefits; a number of key architecture principles need to be adopted on top, and this is where ServiceMesh adds a lot of extra value.

Integrating with System Center, they provide a variety of additional capabilities for managing VM workloads across internal and external Clouds, such as:

  • Agile Operating Model: Their ‘Agility Platform’ enables an agile set of operations, like self-service and automated orchestration in such a way to improve the enterprise maturity model.
  • Cloud-bursting: VM provisioning can span internal Private Clouds, like Hyper-V, as well as external Cloud providers like Amazon, a pattern known as “Cloud-bursting”.
  • VM template driven – Build stacks to enable portability of your apps across these multiple Clouds.
  • Define meta-data for apps – Define and apply meta-data to applications to govern their deployment across Clouds, applying policies that control security, compliance, auto-scaling and so forth. This also enables building a ‘single system of record’ based on this meta-data.

These policy attributes can even include IT helpdesk policies, and the toolset used for functions like Live Migration of VMs for enhanced BCP and portability.
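A hedged sketch of what such per-application metadata and a placement check could look like is below; the attribute names are illustrative assumptions, not the Agility Platform's actual schema.

```python
# Hypothetical shape of per-application metadata driving deployment policy
# across clouds; attribute names are illustrative only.
app_metadata = {
    "app": "claims-portal",
    "classification": "pci-dss",            # compliance constraint
    "allowed_clouds": ["hyper-v-internal", "amazon-ec2"],
    "auto_scaling": {"min_vms": 2, "max_vms": 10},
    "security": {"encrypt_volumes": True, "vpn_only": True},
}

def placement_allowed(app, cloud):
    """Governance check: may this app be deployed to this cloud?"""
    return cloud in app["allowed_clouds"]

print(placement_allowed(app_metadata, "amazon-ec2"))   # True
print(placement_allowed(app_metadata, "public-beta"))  # False
```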

The Cloud as AI Singularity – Introducing Ideate

Dave Duggal is the Founder and CEO of Consilience International, a NYC-based technology firm creating the Ideate framework.

We connected on a Linkedin thread when I posted a link to an old article of mine I had stumbled across, one where I suggested that what was considered the Cloud back then would ultimately evolve to become the technological Singularity.

Obviously this is a very futurist concept but it has immediate practical value when we consider the core mechanics involved.

Dave captured the critical piece in our chat:

Most Platform as a Service providers are just taking the same old middleware stack approaches we found on-premise and throwing it on a virtualized server environment and saying ‘voila – cloud!’, but it is just causing a new generation of silo’d apps with yet another layer of indirection and complexity.

Absolutely. Really we are still in a technology phase where the Cloud is only a ‘version 1.5’, i.e. we are simply shifting an old concept of IT to a hosted model.

For it to leap to a version 2.0, 3.0 and beyond, our fundamental concepts of software and enterprise architecture will need to change, reflecting an entire step-change from in-house data centres as the primary mode of IT to a singular, interconnected environment.

The fundamental shift in software terms is to move away from interconnection via ‘hard-coded’ links between the applications and instead utilize smart agent, semantic web approaches.

Dave is pioneering such an evolution. He says:

What makes our approach different is that we’ve fused early Declarative programming ideas with REST architecture concepts for a “FunctionalWeb”. A 100% loosely-coupled environment where all binding is logical and dynamic. The technical details are somewhat esoteric, but building, governing and adapting composite apps from distributed and diverse applications resources is easy.

For more in-depth info on his technology, check out this Worldwide Web Conference / WS-REST workshop presentation and this InfoQ article.

Certifying the Cloud for eGov class services

The general-purpose nature and widespread value of Guardtime can be seen in how the Keyless Signatures process can be applied in lots of different ways, all for the same fundamental reason: the audit trail.

It’s always this part that’s more key than any other because we’re not actually concerned with IT security, we’re really concerned with who can be blamed if it goes wrong… 🙂

Guardtime’s technology can be applied to a document to guarantee it hasn’t been tampered with, the fundamental element of a legally admissible document, and the same technique can be applied to Cloud systems too, like the log files or the VM images you use to run software.
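The underlying idea (sketched below in a simplified form that is not the Guardtime protocol itself) is that a digest recorded at signing time makes any later modification of a document or log entry detectable during an audit.

```python
# Simplified tamper-evidence sketch: record a digest at write time,
# recompute it at audit time. Not Guardtime's actual protocol.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

log_entry = b"2012-07-14T10:02:11Z user=admin action=vm-start id=i-1234"
recorded = fingerprint(log_entry)        # stored alongside the audit trail at write time

# later, during an audit: any modification changes the digest
assert fingerprint(log_entry) == recorded
assert fingerprint(log_entry + b" tampered") != recorded
```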

The context for this can be best explained by how the external market would define the need for such a service, conveyed through a focus on GovCloud – Government Cloud Computing.

One of the best snapshots of the current state of GovCloud can be seen in the post-review analysis document about last year’s 2011 International Cloud Symposium, which we partnered with OASIS to help organize in the UK.

John Borras (OASIS eGov) and John Sabo (OASIS IDTrust) do a really great job of capturing the event, which brought together thought leaders and government officials from across the EU to define and discuss what constitutes GovCloud, and one of the critical points they note is:

The technical challenges are also not new but they are of a lower order of importance in so far as information itself (and finding workable information governance  and risk management policies) is more important than underlying Cloud computing technologies.

and for this

“Auditability is an absolute requirement”

Guardtime offers one of the building blocks for achieving this type of auditability, which can be applied not just for the benefit of IT administrators and Cloud providers, but also to the higher layers of the applications using them, i.e. the various eGov business processes like driver’s licence renewals and the like.

The thing is that while we’re still in the mode of having the debate about whether the Cloud is less secure than on-prem systems, we’re actually not that far from the point where not only will they be higher class environments but we’ll be able to legally prove it too. Then the tidal wave will truly be unleashed…

I’m excited to say we’ve begun planning the second event, this time for Washington DC, on the same dates of Oct 10-12, so be sure to add that to your diary; it’s going to be a powerhouse event for sure. We’ll be covering the advances in these topics.

Colligo – Offline Cloud 365 replication

Our primary activity here is matching vendors and other innovations to the best practice standards defined for Cloud Computing by organizations like NIST.

For example one of the key areas that NIST highlights is offline replication. In short when you find yourself in a location with no wireless Internet access, what use are your Cloud-based applications then?

While the Cloud is epitomized by the concept of a centrally hosted resource, this fundamental reality of Internet computing means that ultimately the Cloud will evolve a la Internet, via more of a distributed computing model where local and central collaborate together for ‘the best of both worlds’ so to speak.

The specific NIST term is:

“8.1.2 Off-line Data Synchronization – Access to documents stored in clouds is problematic when subscribers do not have network connectivity. The ability to synchronize documents and process data, while the subscriber is offline and with documents stored in a cloud, is desirable, especially for SaaS clouds. Accomplishing such synchronization may require version control, group collaboration, and other synchronization capabilities within a cloud.”
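A minimal sketch of the version-control element that the NIST text calls out is below: an offline edit is reconciled against the cloud copy, and a conflict is flagged when the cloud copy moved on while the subscriber was disconnected. The store layout and field names are assumptions, not Colligo's or Office 365's actual mechanism.

```python
# Hypothetical offline-sync reconciliation based on a simple version check.
def sync(local_doc, cloud_doc):
    """Return the document to keep, plus a conflict flag for the user/group to resolve."""
    if local_doc["base_version"] == cloud_doc["version"]:
        # nobody changed the cloud copy while we were offline: safe to upload
        merged = dict(local_doc, version=cloud_doc["version"] + 1)
        return merged, None
    # cloud copy moved on while we were offline: needs a collaborative merge
    return cloud_doc, "conflict: merge required"

local = {"id": "brief.docx", "base_version": 3, "body": "edited offline"}
cloud = {"id": "brief.docx", "version": 4, "body": "edited by a colleague"}
print(sync(local, cloud))
```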

Vendors provide good implementation examples of these principles, and so the two go hand in hand.

For example one toolset for the Microsoft Cloud world is Colligo. They provide client software that enables you to work on documents etc while offline, which are then synchronized with Cloud services like Office 365 when you are connected.

To the user this is the best of both worlds, i.e. their experience is that they are one with the Cloud, where their files are persistently available online from anywhere, but with the kicker that they’re local too and can be worked on even in the event of network downtime, or -hushed silence- heaven forbid, Cloud downtime.

Hey, it happens, and the best way to deal with it is a technical architecture that is tolerant of both network and Cloud failure; hence, voilà, why tools like Colligo are a key area for CIOs to master. A big user productivity boost from safe Cloud adoption is a good box to tick.

Infopedia – Microsoft Salesforce 2.0

One of the most powerful IT case studies I have read recently was the Microsoft one for their own in-house ‘Salesforce 2.0’ platform, which they call Infopedia.

As detailed in the full white paper on this topic of Cloud 2.0, when it is applied to the sales activities of an organization it’s called ‘Salesforce 2.0’.

A powerful focus area, as you can imagine, and given that one of the flagship examples of how and why to do it is Microsoft’s own adoption of their own new products, it makes for a compelling marketing scenario.

Get Sharepoint FAST!

Critically, what’s really important isn’t actually specific to Sales, although Sales is a great example. Microsoft is addressing a common issue that all users of Sharepoint have; they’ve written a recipe for remedying a situation that many others find themselves stuck in right now.

As described in the white paper, this situation is basically caused by Sharepoint being a victim of its own success. It’s been wildly popular but also wildly used, with little regard for style or technical consistency patterns across sites, or any useful tagging within sites for simple navigation.

The end result is file chaos, even though these sites directly support the sales process by providing sales collateral like brochures to sales teams when they need it. If a document is buried n levels deep in yet another Sharepoint site, it’s lost to the chaos.

Microsoft themselves ended up in this position and have addressed it through creating a new architecture using their recently acquired FAST Search engine.

Search is simpler because it can do a much better job of indexing content, and can process vast swathes of it from all kinds of sources, like documents, enterprise applications and even XML feeds. Users are better off submitting their content directly to Search rather than uploading and indexing documents by hand.

Therefore I think it’s worthwhile to recognize and promote this new category of Sharepoint FAST, i.e. with FAST embedded and activated. It’s an entirely new paradigm for Sharepoint users and developers, and given the impact it offers like this very quick, very practical ROI for the sales force, it’s going to be a dynamite segment for the industry.

UCaaS : Foundation for Social Business Architecture

The Province of Ontario has recently become more proactive in asking local industry for innovative new Cloud solution models, such as UCaaS – Unified Communications as a Service.

This provides the ideal stimulus for development of the local Cloud supplier base, and to identify where the opportunities are for the additional innovation that Ontario is looking for, we can look at a number of different aspects of the Cloud industry.

Cloud and Social Business

The first and most important of these is ‘Social Business Architecture‘ – How to design applications to become more integrated with the world of social media.

While we often talk of the big industry battle taking place around Cloud as the main theme, it’s actually the field of ‘Social Business’ where key players stand to win or lose mind-share and therefore market share.

IBM, Microsoft, Cisco and Salesforce.com are all staking major bets, and they all have major product lines, in the field of ‘Social Collaboration’, and it’s these cool and sexy features that are going to act as the catalyst for inspiring the end-user demand that drives adoption of the underlying Cloud services.

For example the Microsoft offering is the combination of their Sharepoint collaboration software with their Unified Communications product set.

They also have groovy little tools like the ‘Social Connector‘, to integrate your social networks like Linkedin directly into your email.

Social Business Architecture

To gain a quick overview for IBM and also a good introduction into the open standards that define a common set of Social Business best practices, check out this IBM paper – Technical Strategy for Social Business (9-page PDF).

This provides a good overview of the key technologies and standards that define Social Business, which the vendors then implement.

For example, for ‘Activity Streams‘ you can see these implemented in the products below (a simplified example entry is sketched after the list):

  • Salesforce.com Chatter – This provides the core Facebook-type interactivity between colleagues that is the foundation for Social Business.
  • For Microsoft, their Sharepoint toolset offers a plethora of similar CMS-based features, like this one, and, highlighting a key point, it’s also built into their UC client, linked to Sharepoint. You can see a demo of it in this video, at around 1:35.
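For readers who haven't seen the format, here is roughly what a single Activity Streams entry looks like, simplified from the open specification; the field values are illustrative.

```python
# Simplified Activity Streams-style entry (illustrative values).
import json

activity = {
    "published": "2012-07-14T12:34:56Z",
    "actor": {"objectType": "person", "displayName": "Jane Analyst"},
    "verb": "post",
    "object": {"objectType": "note", "content": "Q3 pipeline review uploaded"},
    "target": {"objectType": "group", "displayName": "Sales Team"},
}
print(json.dumps(activity, indent=2))
```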

There are many other features too, and this is an older video, so there is likely more new functionality on top. All of this sits on top of the software approach that enables an organization to rid itself of old PBX equipment, a key point Ontario makes in their news article.

The key dynamic is this interplay between UC and Sharepoint, because the ‘find an expert’ search process that the demo shows the UC client being used for is only as good as how effectively Sharepoint is populated with the required data.

Cloud 2.0

This is not as easy as it sounds; indeed, poor user adoption of these features is still a basic but show-stopping challenge that many adopters of these applications are dealing with.

Sharepoint is a wildly popular app but equally it’s wildly used, in terms of a lack of central standards or any uniformity in records indexing, resulting in ‘file chaos’.

How to address this, by using other tools like the ‘FAST’ Search engine they recently acquired, is the headline theme of our latest white paper, Microsoft Cloud 2.0.

For example here is another video demonstrating how Sharepoint can be tailored for the same expert search process, through utilizing FAST.