MaaS implements Small Data and enables Personal Clouds

Abstract – MaaS (Model as a Service) introduces a new approach to ordering and classifying data model design and deployment to the Cloud. MaaS changes the way data is moved to the Cloud because it allows data taxonomy, size and contents to be defined up front. Starting from data model design, MaaS can guide the DaaS (Database as a Service) lifecycle, providing data granularity and duty rules: as a consequence, MaaS implements the new concept of Small Data.

In fact, Small Data answers the need to control data dimension and granularity “on-premise”. Small Data is not, however, a limitation on data volume: it affords full configuration of the data model and provides two main advantages, data model scale and data ownership, which in turn enable assigned data deployment and, finally, data deletion in the Cloud.

Introduction

The inheritance of the past forces us to manage big data as a consequence of repeated integration and aggregation of data systems and of data movement. Data coming from Social Networks and other data-intensive applications has contributed to filling the EB containers.

Administering big data is not an option but rather a debt contracted, above all, with data management history. Big Data analytics now seems to be the standard practice for storing and exploiting massive data. Is there any way to change this norm? Companies that have for many years used data models to design and map their data started the change, and today hold the “tiller” of their data heritage.

Accordingly, Small Data is far from being another catch-phrase: the antonym of Big Data aims to change the mindset and to use data models to design fit, well-defined data systems, especially when data is moved to the Cloud. MaaS implements Small Data, enables the Personal Cloud and also helps to recover, order and classify the data inheritance of the past.

Why MaaS defines and implements Small Data
MaaS meets the ever-increasing need for data modeling and provides a solution that preserves continuity between data design and application. Further, it helps in choosing and defining architectures, properties and service assets with an eye on the possible evolution and changes the data service may undergo. MaaS addresses scaling and dimensioning in the design of data systems, supports scalability and, in particular, the application of agile data modeling. Small Data starts when the dimension of the data system has to be defined and controlled from its description (metadata) onwards. In effect, we are improperly speaking of data systems: we are really dealing with data services, and the Cloud is the great supplier of data services.

Since MaaS allows “on-premise” definition of data design requirements, data topology, performance, placement and deployment, the models themselves are the services mapped into the Cloud. In fact, data models allow “on-premise” verification of how and where data has to be designed to meet the Cloud service’s requisites. In practice, MaaS enables:

– Designing the data storage model. The model should enable query processing directly against databases, to strengthen privacy levels and to secure changes from database providers;

– Modeling data to calculate physical resource allocation “a priori”. How many resources does the service need? Do database partitions influence resource allocation and/or replication? Modeling the data means designing the service; calculating these magnitudes a priori drives both deployment and database growth (a toy calculation is sketched after this list);

– Modeling data to predict usage “early” and to optimize database handling. Performance and availability are two of the main goals promised by the Cloud. Usage is not directly dependent upon the infrastructure and, as a consequence, can become a constraint. Calculating the usage rate means understanding the data application life cycle and then optimizing the data service properties;

– Designing multi-data structures to control database elasticity and scalability. Models contain deployment properties and map the target database service. Therefore, the model designs database elasticity and scalability “on-premise”. As we will see later, multi-database design is also a way to control data persistence.
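As a toy illustration of the “a priori” calculation mentioned in the second point above, the fragment below derives a storage estimate from quantities a data model would carry. The figures, variable names and formula are invented for this example; they are not taken from any MaaS specification.

// Rough capacity estimate derived from the model, before anything is deployed.
long averageRowBytes = 2000;   // from the entity definitions in the model
long expectedRows = 5000000;   // from the expected service usage
int replicas = 3;              // replication property mapped in the model

long estimatedStorageBytes = averageRowBytes * expectedRows * replicas;
// Roughly 28 GB in this example: a figure the designer can check against the Cloud service plan.
Console.WriteLine($"Estimated storage: {estimatedStorageBytes / (1024.0 * 1024 * 1024):F1} GB");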

Thus, imagine users asking for temporary services to be deployed and then, after the services have been closed, cancelled. Services satisfying these requirements are based upon data models designed with MaaS agile modeling techniques: controlled size and contents, fast update, rapid testing and continuous improvement through to the generation of the model/dataset on the target database. Models should define users’ data contents, dimension (for example, to suit mobile services) and data deployment (geo-location, timing …), and allow the data to be, on demand, definitively destroyed. A minimal sketch of such a model descriptor is given below.
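To make this concrete, here is a purely illustrative sketch of what a Small Data model descriptor could capture. The class and property names are hypothetical, not part of any published MaaS API; they simply restate the properties discussed above (contents, size, deployment location, retention and destruction) as code.

using System;

// Hypothetical descriptor for a Small Data model (illustration only).
public class SmallDataModel
{
    public string Owner { get; set; }              // data ownership stays attached to the model
    public string[] Entities { get; set; }         // taxonomy: which structures are deployed
    public long MaxSizeBytes { get; set; }         // controlled size (on-premise dimension)
    public string DeploymentRegion { get; set; }   // geo-location agreed before deployment
    public TimeSpan? RetentionPeriod { get; set; } // optional time limit for Cloud storage
    public bool DestroyOnServiceEnd { get; set; }  // data is shredded when the service is closed
}

A MaaS tool would synchronize such a descriptor with the target DaaS, so that the same properties drive deployment, monitoring and, eventually, deletion.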

With MaaS, Small Data can be defined from the metamodel design onwards and then implemented, synchronized and deployed to the target by applying the DaaS lifecycle. Of course, although Small Data is placed under size and content control, it can be created and replicated indefinitely: is this just another way to end up with Big Data again?

The following aspects should be considered:

1) Users should be enabled to destroy their Small Data in the Cloud when the service is over. This is a great feature for data security. Moreover, data navigation changes from infinite chains to point-to-point hops (or better, “Small to Small” Data) based upon the Small Data model definitions;

2) A time limit could be set, or an over-storage timing strategy might be defined, for Small Data allocation in the Cloud;

3) Statistically speaking, by applying Small Data the average volume of data stored by data-intensive applications (Social Networks, for example) could be computed and standard deviations estimated, something that is difficult today because storage allocation is multiple and, above all, free.

This doesn’t mean that the YB will be the last order of magnitude of data size the SI defines. Ultimately, MaaS enables service designers to plan, create and synchronize Small Data models from and to any web data source and data management system in the Cloud. Data granularity changes, and designers can calibrate data models and data services by defining the model’s content, which allows users to deploy (i.e. allocate) and then, at the end of the service, to shred the data in the Cloud. This is a new frontier for Mobile services, Open Data and what today might be defined as the Personal Cloud.

MaaS enables Personal Clouds
What is really new in the Small Data definition is the possibility of moving data ownership from the big players to users and, as a consequence, of putting ownership and deployment location in relation to each other. Today data storage is almost entirely under the provider’s control: providers and storage players manage data users and data security. By applying MaaS, the Personal Cloud is enabled; here is what changes:

1) Integrity defined in MaaS at the Small Data level is maintained throughout the service. Ownership matches the data structure/dataset deployed;

2) MaaS identifies trust boundaries throughout the IT architecture. Data models are the right way to define trust boundaries and ownership, to prevent unauthorized access and sharing in “Small to Small” Cloud storage and navigation;

3) MaaS makes it possible to set the location in the data model. Any mismatch could be an infringement and must be reconciled with the terms registered in the Small Data design. Point-to-point navigation based upon Small Data simplifies Personal Data management and maintenance: this has a positive impact on data storage order and security;

4) Ownership and data location are linked by a relation. Once Personal Data in the Cloud has to be deleted, the Cloud provider should ensure the data is unrecoverable. Looking at the data model mapping, the data has to be destroyed in the location defined in the Small Data design. Data owners know where the data has been deployed because, before opening the data service in the Cloud, they may accept or refuse the location assigned and might ask for a new storage site (a sketch of this check follows the list).
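As an illustration of points 3) and 4), the fragment below sketches how a deployment location could be checked against the model and how an owner-requested deletion could be carried out where the model says the data lives. It is hypothetical C# built on the SmallDataModel descriptor sketched earlier; the type and method names (PersonalCloudGuard, VerifyLocation, ShredAndConfirm) are invented for this example and do not belong to any MaaS product.

using System;

// Illustration only: ties ownership, agreed location and deletion together.
public class PersonalCloudGuard
{
    // Compare the region the provider actually used with the one registered in the model.
    // Any mismatch is treated as an infringement of the Small Data design.
    public bool VerifyLocation(SmallDataModel model, string actualRegion)
    {
        return string.Equals(model.DeploymentRegion, actualRegion, StringComparison.OrdinalIgnoreCase);
    }

    // The owner asks for deletion; the provider must make the data unrecoverable
    // in the location defined by the model.
    public void ShredAndConfirm(SmallDataModel model, Action<string> shredInRegion)
    {
        shredInRegion(model.DeploymentRegion);
        Console.WriteLine($"Data owned by {model.Owner} shredded in {model.DeploymentRegion}.");
    }
}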
Personal Cloud has a wide range of applications:

1) Mobile services using single or multiple personal storage, backed by single or multiple datasets in the Cloud;

2) Healthcare services as introduced, for example, in [10] and in [11];

3) Open Data services, especially when they are interfaced with the services in 1) and 2) above;

4) HR services, mainly when they concern curricula content. Owners should be free to definitively cancel personal data, as defined in the Small Data design;

5) Generic personal data in the Cloud, regardless of whether it is stored permanently or temporarily.

Applying MaaS, Small Data can be considered an on-premise service because it collects behaviours and information concerning structures (to be deployed in the Cloud), access rights, security and scaling, partitioning and evolution. In other words, Small Data ensures that the behaviour and effectiveness of the released Cloud applications can be measured and tracked to meet users’ needs. Models leverage Cloud data services to enable flexible deployment and therefore to enforce Personal Data persistence, storage and geo-location policies throughout the Cloud.

Conclusion
Big Data is a consequence; Small Data is a new start. MaaS provides best practices and guidelines to implement Small Data and Personal Cloud ownership, starting from data modeling and the DaaS lifecycle. We want to underline that Big Data as we know it is a consequence of how companies have for many years used, stored and maintained data. Small Data, instead, might be a new way to manage data in the Cloud. Especially when personal data is considered, the Personal Cloud provides, on the one hand, a preconfigured and operational data definition (for example, local information vs. cloud information) and, on the other hand, the details of how to enable provisioning and deployment of multiple storage in the Cloud. Finally, starting from the “on-premise” Small Data design, the Personal Cloud can be applied and users quickly gain an understanding of Cloud deployment, data centre geo-locations and service constraints.

Glossary
Big Data – Collections of datasets that cannot be processed (analysis, storage, capture, search, sharing, visualization …) using on-hand database management tools or traditional data processing applications, due to data complexity, data volume and fast growth;
EB – Exabyte, unit of information or computer storage equal to one quintillion bytes (10^18 bytes);
DaaS – Database as a Service;
MaaS – Model as a Service (trademark);
SI – Système International d’unités metric prefixes;
YB – Yottabyte, unit of information or computer storage equal to one septillion bytes (10^24 bytes).

References
[1] N. Piscopo – ERwin® in the Cloud: How Data Modeling Supports Database as a Service (DaaS) Implementations
[2] N. Piscopo – CA ERwin® Data Modeler’s Role in the Relational Cloud
[3] D. Burbank, S. Hoberman – Data Modeling Made Simple with CA ERwin® Data Modeler r8
[4] N. Piscopo – Best Practices for Moving to the Cloud using Data Models in the DaaS Life Cycle
[5] N. Piscopo – Using CA ERwin® Data Modeler and Microsoft SQL Azure to Move Data to the Cloud within the DaaS Life Cycle
[6] N. Piscopo – MaaS (Model as a Service) is the emerging solution to design, map, integrate and publish Open Data https://cloudbestpractices.wordpress.com/2012/10/21/maas/
[7] N. Piscopo – MaaS Workshop, Awareness, Courses Syllabus
[8] N. Piscopo – DaaS Workshop, Awareness, Courses Syllabus
[9] N. Piscopo – Applying MaaS to DaaS (Database as a Service) Contracts. An introduction to the Practice https://cloudbestpractices.wordpress.com/2012/11/04/applying-maas-to-daas/
[10] N. Piscopo – MaaS applied to Healthcare – Use Case Practice, https://cloudbestpractices.wordpress.com/2012/12/10/maas-applied-to-healthcare/
[11] N. Piscopo – MaaS and UMA implementation, page 16 in Transform2, https://cloudbestpractices.files.wordpress.com/2013/01/transform-203.pdf
[12] Agile Modeling – http://www.agilemodeling.com/

Disclaimer
“MaaS implements Small Data and enables Personal Clouds” (the Document) is provided AS-IS for your informational purposes only. In no event will the author of the Document be liable to any party for direct, indirect, special, incidental, economic (including lost business profits, business interruption, loss or damage of data, and the like) or consequential damages, without limitation, arising out of the use or inability to use this Document or the products, regardless of the form of action, whether in contract, tort (including negligence), breach of warranty, or otherwise, even if advised of the possibility of such damages. Any warranties are specifically disclaimed, including, but not limited to, the express or implied warranties of merchantability, fitness for a particular purpose and non-infringement, regarding this Document or the products’ use or performance. All trademarks, trade names, service marks and logos referenced herein belong to their respective companies/offices.


Social CRM+: How to master Linkedin, Salesforce.com AND Google+

For any modern entrepreneur I’d suggest the sweet spot of Cloud applications you should master is the holy trinity of these three killer apps – Linkedin, Salesforce.com and Google+ Apps.

In short these cater for the end-to-end requirements of a sales operation, so you can begin selling and closing deals.

Establishing successful sales teams is naturally a milestone you want to reach as quickly as possible, and having the tools in place equally quickly is one of the primary benefits of on-demand IT.

Getting these three running gets you out knocking on doors, capturing prospect contact details, submitting initial proposals and even, hooplah, winning customer contracts.

As things evolve you can add everything else: e-contracting to speed up the return of those contracts, better e-marketing automation and social media publishing. These are also staples, but the first three get your beachhead established.

Google+ Apps

You probably already have a good understanding of the value of Linkedin and Salesforce.com – one is a public market where you meet everyone and network to find and work with contacts, and the other is where you record the details of those transactions to better manage the overall process.

The most interesting point to note about the Google proposition is that i) there are many different component parts, and ii) these include both a public market and internal automation tools.

By this I mean Google+ is a social community akin to Linkedin: you can share links into groups on a broader, global basis, but you can also add different levels of privacy settings so that these are restricted to various groups of business contacts, e.g. your ‘Office Team’ or ‘Sales Network’.

Google+ calls these “Circles” and it’s a very quick and slick way of managing that aspect of information privacy. In short it’s like your own “private Twitter” where you only send certain types of tweets to a controlled group of certain people.

It also bridges into Google Hangouts, the uber-slick desktop videoconferencing app, so that this flow of information can evolve into more in-depth dialogue as and when needed.

With the bread and butter of Google Apps then being the office productivity suite (Word-type editor, spreadsheets, etc.) as well as email, calendaring et al, you can see how they cater for the remaining part of the spectrum, where you need this mix of desktop tools to work on writing client proposals for the contacts you meet.

If you are interested in learning more on this topic, check out our Small Business eBook, and if you’d like to see it hands-on in action, check out our seminar series.

The next scheduled event is Municipality as a Service. This will showcase Google+ Apps for the towns and cities of Canada.

Getting Real with Ruby: Understanding the Benefits

By Jennifer Marsh

Jennifer Marsh is a software developer, programmer and technology writer and occasionally blogs for Rackspace Hosting.

Ruby is an advanced language for many programmers, but it’s a powerful one for building dynamic interfaces on the web. Dynamic web hosting shouldn’t be taken lightly, because security holes still exist. A good cloud web host will offer a safe environment for development while still offering scalability and usability for Ruby programming, testing and deployment.

Space for Testing and Development

Web applications can grow to several gigabytes. For newer Ruby developers, it’s helpful to have enough storage space for backups, so a backup can be made to fall back on when code changes are deployed. Ruby is an interpreted language, but a bug can still mean a lot of time and resources devoted to discovery and fixing. Instead of emergency code reviews, the developer can restore the old version of the application before troubleshooting the bug.

Support for Database or Hard Drive Restoration

In severe cases, the application corrupts the data stored in the database. A good web host will back up the database and restore it when the site owner needs it restored. This is especially useful in emergencies when the site gets hacked or data is corrupted due to application changes or hard drive crashes. The web host should support the client, including in cases of restoring database and application backups.

Find Support for Ruby

To run Ruby, the web host must support the framework. Check with the hosting company and verify that the host allows execution of CGI files. A good way to check is to find a host that offers FastCGI and specifies that it supports Ruby and Ruby on Rails. Ruby is typically supported by Linux hosts, but some Windows hosts will support it too. Ruby is an interpreted language and, like Java, can run on any operating system.

Ask for Shell Access

Ruby environments can be a bit hairy to configure. If the programmer is familiar with the language, having shell access helps speed up application configuration. Not all hosts offer shell access, but with an extended or advanced service most hosts will oblige the webmaster. Shell access gives the webmaster more control over the Ruby settings.

The most important parts of a web host are customer support and up-time. Most web hosts have a contract with the client that promises a percentage of up-time. This should be around 99%, meaning the website will be up for visitors. Check with the host for contract specifics before purchasing cloud hosting for Ruby.

Why Cloud Servers is the choice for Windows VPS

By Jennifer Marsh

Jennifer Marsh is a software developer, programmer and technology writer and occasionally blogs for Rackspace Hosting.

Businesses basically have two choices for operating systems when shopping around for cloud servers: Windows or Linux. While Linux is cheaper and runs on many enterprise servers, businesses that run internal applications for a Windows desktop can benefit from Windows cloud servers.

The IT department and users will understand the platform more easily than were they to learn Linux. But launching an intuitive platform is only one of the advantages of cloud servers in a Windows environment.

Multiplatform Support

Businesses that have been online for several years probably have some legacy code in use in various departments. Fortunately, cloud servers can support multiple platforms for businesses moving towards a Windows platform.

Integration with Microsoft Azure

The latest Windows Server 2012 integrates with Microsoft’s Azure cloud platform. Azure gives businesses the tools to create platform as a service (PaaS) offerings and integrates cloud server technology with an internal network. To take advantage of the Azure service, the business must set up a cloud hosting environment. Azure is more easily integrated with a corresponding Windows cloud host. The IT manager can use Microsoft’s wizard to install and configure the Azure server for cloud hosting.

More Cost Efficient for Support

Because most IT infrastructures have a lot of moving parts, system errors, downtime and desktop support can be expensive, especially when hosted internally. Having onsite personnel can be expensive for any company, while too little support can also cost the company money. Hosting Windows services in the cloud eliminates much of the cost of having onsite support staff available around the clock, seven days a week. Check the contract for specifics before signing up for any particular service.

Additionally, hosting in the cloud means the company only pays for the bandwidth and server resources used each month rather than a flat fee. Any cloud host charging a flat fee is in fact not a true cloud host. By paying only for what is used, businesses can cut down on IT infrastructure costs. As the business grows and more revenue is brought in, the cloud costs will also grow, but those costs grow only with the business’ success.

Extending on-premise applications to the cloud

When you have an existing system and need to deploy a mobile app that won’t interfere with what’s already in place, the cloud can offer a great solution if managed correctly. With so many services on offer, there are many problems for which the cloud can be considered a solution, especially in projects where “no one size fits all”. Such was the case for a recent cloud use case in mobile app development, as told in this SlideShare.

This presentation was given at the Amazon Web Services User Group UK meetup on 15th May, in London, England; it was written and delivered by Intechnica’s Technical Director Andy Still.

Read more blogs about cloud, development and application performance from Intechnica

Designing Applications for the Cloud

When designing applications for the cloud, or extending on-premise applications into the cloud, it should go without saying that you can’t just deploy and expect good results. There is much to consider from the very beginning as it relates to using cloud platforms in development; this includes scaling out, taking new and imaginative approaches to data storage, making full use of the wide range of products and services on offer from cloud providers (beyond hosting), and exploring the many flavours of hybrid solution, which mean all types of business can leverage the benefits of the cloud. These details are laid out further in the following SlideShare presentation.

“Architecting for the Cloud” is the theme for the upcoming Amazon Web Services User Group UK meetup (15th May, London). Intechnica’s Technical Director Andy Still will be there, and plans to talk about extending an application to create a caching platform for mobile access within AWS. If you’re in the London area this is definitely worth coming along to for the discussions and networking around AWS and cloud computing.

Read more blogs about cloud, development and application performance from Intechnica

Cloud Computing Use Case: Development & Test Environments

In a recent article, “Put Your Test Lab In The Cloud”, InformationWeek outlined the pros, cons and considerations you must take into account when hosting test labs in the cloud. Using the cloud for this purpose is not necessarily a new idea, and it’s one that certainly makes a lot of sense; replication of test results depends upon consistency across all variables, and putting a test lab in the cloud allows you to achieve that from anywhere, for anyone who needs to use it.

Indeed, the use of private or public cloud services, like Amazon Web Services, as a platform for software development & testing is common practice for some businesses already. The benefits of using the cloud for this include the general positives of cloud, such as cost savings (in terms of the lack of start-up costs, as well as hardware upgrades, maintenance etc. coming out of the equation), but also extend to specific benefits, like increased control over projects, quick duplication of environments (especially when compared to “tin” set-ups), speed of deployment, ease of collaboration, and the ability for testers and developers to access environments on demand, removing a barrier to efficiency. It’s not hard to see why the practice is growing in popularity along with other cloud services.

To best understand the benefits of cloud computing in software development and test environments, it’s useful to see the process in action. We recently hosted a webinar showing the process in detail, from configuring a template for the environment, to launching and connecting remotely to the machine image. In our example, we used Amazon Web Services with a custom management tool, but the process is fairly standard.

It’s important to note that different considerations need to be made for each cloud service provider, especially when weighing up public and private cloud offerings. Obviously, it’s faster and easier to get started with a public cloud, but it can be harder to manage costs, and some would consider a layer of control to be lost. On the other hand, private clouds are costly and time-consuming to set up in comparison, and they represent a much bigger commitment to justify.

DynamoDB up to the usual high standards of AWS tools: A NoSQL comparison

This blog post was originally written by Intechnica’s Technical Director Andy Still.

With the pace of new product releases on Amazon AWS it is often not possible to get more than a quick look at any new development. One recent development that has really caught my interest, however, is the new DynamoDB offering.

Essentially this is Amazon’s offering for the highly scalable NoSQL database market.

Like everything else in AWS it is charged on a pay-per-use basis; however, you have to specify in advance the level of read/write capacity you will require, and Amazon will set aside that level of capacity, guaranteed, for you. This value can be set via the console or in real time via the API. To be honest I found the pricing model a bit confusing: I wasn’t 100% sure what happened if you fell above or below the levels you set, or how the actual billing rate was calculated. Looking at the real-time billing information in my account didn’t clear this up. I think if I decide to use DynamoDB in any sort of anger I will need to contact Amazon for clarification.
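For reference, adjusting that reserved capacity from code looks roughly like the sketch below. It assumes the low-level AWS SDK for .NET (the Amazon.DynamoDBv2 namespace of a recent SDK); exact class and method names vary between SDK versions, and newer versions expose only async variants, so treat this as a sketch rather than copy-paste code.

using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

var client = new AmazonDynamoDBClient();

// Raise the reserved read/write capacity for an existing table.
client.UpdateTable(new UpdateTableRequest
{
    TableName = "Reply",
    ProvisionedThroughput = new ProvisionedThroughput
    {
        ReadCapacityUnits = 10,
        WriteCapacityUnits = 5
    }
});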

Most of my NoSQL experience has been using Azure Table Services so my initial reaction is to compare the two.

In essence they work in a similar fashion. They are large key/value datastores where defined tables allow for the storage of entities containing an arbitrary number of attribute/value combinations that are stored against a key – called a hash key. Keys can be grouped into ranges, identified by a range key. If a range key is specified then the hash key/range key combination must be unique for that table; otherwise all hash keys for a table must be unique. For those familiar with Table Services, the hash key is the equivalent of the primary key and the range key of the partition key – however, the range key is optional in DynamoDB whereas the partition key is compulsory in Table Services.

That’s about as much detail as I want to go into about the actual service itself. For more details the Amazon literature is here: http://aws.amazon.com/dynamodb/

Let’s instead get into a bit of actual usage, particularly via the AWS .NET SDK. All API calls can be made directly via JSON-based REST calls, but I’ve been testing using the SDK directly from .NET.

The first step is to install the AWS Toolkit for Visual Studio (http://aws.amazon.com/visualstudio/). This installs the SDKs for the various AWS services, including DynamoDB, as well as the AWS toolbar for Visual Studio and a new AWS project type that comes with the correct references already added.

The AWS toolbar is a useful tool giving you access to control some AWS services from within Visual Studio, including creating DynamoDb tables and viewing/editing their content.

A sample of the DynamoDB schema. Image: aws.amazon.com

I used the toolbar to create some initial test tables. A few points to mention about the creation procedure:

  1. At creation time you need to specify whether the table will have a range key or not. This cannot be changed later. This struck me as a potential issue if you misjudge how much a table will grow and want to introduce ranging later on.
  2. At creation time you also have to specify the level of reads/writes per second you want assigned to this table. As I mentioned earlier I found this slightly off-putting, primarily because I actually had no idea. As this was a test table it might only need a few reads/writes per second, very infrequently, but I had to commit to a permanent fixed allocation (although this can be changed at any point). A rough sketch of the creation call appears after this list.
  3. Amazon doesn’t currently offer an offline version for development, so any testing has to be done on the live AWS service. This could result in running up bills if doing full production-scale tests. AWS does offer a generous free tier for DynamoDB, but it is always something to be aware of when using AWS.
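To make points 1 and 2 concrete, here is roughly what creating a table with a hash key, an optional range key and a provisioned throughput looks like through the low-level .NET SDK. Again this is only a sketch: it assumes the Amazon.DynamoDBv2 namespace of a recent SDK (newer versions expose only async methods), and the table and attribute names are just examples.

using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

var client = new AmazonDynamoDBClient();

client.CreateTable(new CreateTableRequest
{
    TableName = "Reply",
    // Only the key attributes are declared up front; everything else is schema-less.
    AttributeDefinitions = new List<AttributeDefinition>
    {
        new AttributeDefinition { AttributeName = "Id", AttributeType = ScalarAttributeType.S },
        new AttributeDefinition { AttributeName = "ReplyDateTime", AttributeType = ScalarAttributeType.S }
    },
    KeySchema = new List<KeySchemaElement>
    {
        new KeySchemaElement { AttributeName = "Id", KeyType = KeyType.HASH },            // hash key (point 1)
        new KeySchemaElement { AttributeName = "ReplyDateTime", KeyType = KeyType.RANGE } // range key, cannot be added later
    },
    // Point 2: reads/writes per second must be reserved at creation time.
    ProvisionedThroughput = new ProvisionedThroughput
    {
        ReadCapacityUnits = 5,
        WriteCapacityUnits = 5
    }
});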

Having got my tables created, I could now start looking at how to enter data into them. The system I was looking to build in DynamoDB was a remote offshoot of an existing system that held a read-only subset of the data. All data would be pulled from REST API calls as JSON objects and stored within AWS until read and cached in memory.

DynamoDB sounded like a good candidate for this task, as I thought I could just read the JSON objects into .NET objects and store them directly in DynamoDB.

The first API example I looked at (http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/LoadData_dotNET.html) used a document object, assigned attributes to it and then pushed it to the remote table:

Amazon.DynamoDB.DocumentModel.Table replyTable =
    Amazon.DynamoDB.DocumentModel.Table.LoadTable(client, "Reply");

var thread1Reply1 = new Document();
thread1Reply1["Id"] = "Amazon DynamoDB#DynamoDB Thread 1"; // Hash attribute.
thread1Reply1["ReplyDateTime"] = DateTime.UtcNow.Subtract(new TimeSpan(21, 0, 0, 0)); // Range attribute.
thread1Reply1["Message"] = "DynamoDB Thread 1 Reply 1 text";
thread1Reply1["PostedBy"] = "User A";

replyTable.PutItem(thread1Reply1);

Although this syntax is pretty straightforward, I didn’t really like the look of it. Converting my existing objects would be either a manual process involving a lot of magic strings or an overly complex process of mapping data attributes to document attributes.

Luckily AWS has implemented the second option for you. They have created a number of attributes that you can decorate your class with to define it as a DynamoDB class. Instances can then be read directly to and from the database.

The minimum attributes you need to define on a class are:

[DynamoDBTable("TableName")]
public class ClassName
{
    [DynamoDBHashKey]   // hash key
    public int Id { get; set; }
}

This will then save all data to the table TableName with the hash key Id. You can also use the DynamoDBProperty attribute to mark a field to be read to and from the database, and even use DynamoDBProperty("dbFieldName") to associate it with a differently named attribute in the database table. DynamoDBIgnore flags a field that should not be transferred to the database.
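For instance, a hypothetical class using those attributes (the table, class and property names here are invented for illustration) might look like this:

[DynamoDBTable("Products")]
public class Product
{
    [DynamoDBHashKey]
    public string Sku { get; set; }

    [DynamoDBProperty("colour")]   // stored under the attribute name "colour" in the table
    public string Colour { get; set; }

    [DynamoDBIgnore]               // never written to or read from the table
    public string CachedDisplayLabel { get; set; }
}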

Therefore the only code needed to store an object into a DynamoDB table is to define an object:

[DynamoDBTable("Fruit")]

public class tFruit : Fruit

{

[DynamoDBHashKey]

public new string Name { get; set; }

 

public string Colour { get; set; }

}

And then create an instance of the object and save it to the DynamoDb context.

AmazonDynamoDBClient client = new AmazonDynamoDBClient();
DynamoDBContext context = new DynamoDBContext(client);

// Note: the instance must be of the decorated type (tFruit), not the plain Fruit base class.
tFruit banana = new tFruit()
{
    Name = "banana",
    Colour = "yellow"
};

context.Save(banana);

Getting it back out is even easier:

tFruit bananaFromDb = context.Load<tFruit>("banana");

This is a very nice and simple solution if you are defining your own classes.
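The same context can also pull back multiple items. For example, a scan with a filter condition looks roughly like the fragment below, which continues the Fruit example above; ScanCondition and ScanOperator live in the SDK’s DataModel/DocumentModel namespaces. Bear in mind that a scan reads the whole table, so for anything beyond small tables a query against the hash/range keys is the better option.

// Find every fruit stored with the colour "yellow".
IEnumerable<tFruit> yellowFruit = context.Scan<tFruit>(
    new ScanCondition("Colour", ScanOperator.Equal, "yellow"));

foreach (tFruit fruit in yellowFruit)
{
    Console.WriteLine(fruit.Name);
}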

I was impressed with the SDK, and it looks like a simple job to get data in and out of the database, so this could well be a viable option if a NoSQL solution is required, especially if Amazon’s claims about scalability hold up. There is an entity size limit of 64 KB, which could be a limiting factor (Table Services allows 1 MB), and the lack of an offline emulator for development could also cause issues.

All in all though, this is up to the usual high standard of tools that AWS produce.

Read more posts by Andy Still at Intechnica’s blog