Extending on-premise applications to the cloud

When you have an existing system and need to deploy a mobile app that won’t interfere with what’s already in place, the cloud can offer a great solution if managed correctly. With so many services on offer, the cloud can be considered as a solution for many problems, especially in projects where “no one size fits all”. Such was the case for a recent cloud use case in mobile app development, as told in this SlideShare.

This presentation was given at the Amazon Web Services User Group UK meetup on 15th May, in London, England; it was written and delivered by Intechnica’s Technical Director Andy Still.

Read more blogs about cloud, development and application performance from Intechnica


Designing Applications for the Cloud

When designing applications for the cloud, or extending on-premise applications into the cloud, it should go without saying that you can’t just deploy and expect good results. There is much to consider from the very beginning when using cloud platforms in development: scaling out, taking new and imaginative approaches to data storage, making full use of the wide range of products and services on offer from cloud providers (beyond hosting), and exploring the many flavours of hybrid solution, which mean all types of business can leverage the benefits of the cloud. These details are laid out further in the following SlideShare presentation.

“Architecting for the Cloud” is the theme for the upcoming Amazon Web Services User Group UK meetup (15th May, London). Intechnica’s Technical Director Andy Still will be there, and plans to talk about extending an application to create a caching platform for mobile access within AWS. If you’re in the London area, this is definitely worth coming along to for the discussions and networking around AWS and cloud computing.


Cloud Computing Use Case: Development & Test Environments

In a recent article, “Put Your Test Lab In The Cloud”, InformationWeek outlined the pros, cons and considerations of hosting test labs in the cloud. Using the cloud for this purpose is not necessarily a new idea, and it’s one that certainly makes a lot of sense; replication of test results depends upon consistency across all variables, and putting a test lab in the cloud allows you to achieve that from anywhere, for anyone who needs to use it.

Indeed, the use of private or public cloud services, like Amazon Web Services, as a platform for software development and testing is already common practice for some businesses. The benefits include the general positives of cloud, such as cost savings (no start-up costs, and hardware upgrades, maintenance and so on come out of the equation), but also extend to specific benefits: increased control over projects, quick duplication of environments (especially compared to “tin” set-ups), speed of deployment, ease of collaboration, and the ability for testers and developers to access environments on demand, removing a barrier to efficiency. It’s not hard to see why the practice is growing in popularity along with other cloud services.

To best understand the benefits of cloud computing in software development and test environments, it’s useful to see the process in action. We recently hosted a webinar showing the process in detail, from configuring a template for the environment, to launching and connecting remotely to the machine image. In our example, we used Amazon Web Services with a custom management tool, but the process is fairly standard.

It’s important to note that different considerations apply to each cloud service provider, especially when weighing up public and private cloud offerings. Obviously, it’s faster and easier to get started with a public cloud, but it can be harder to manage costs, and some would consider a layer of control to be lost. On the other hand, private clouds are costly and time-consuming to set up in comparison, and are a much bigger commitment to justify.

DynamoDB up to the usual high standards of AWS tools: A NoSQL comparison

This blog post was originally written by Intechnica’s Technical Director Andy Still.

With the pace of new product releases on Amazon AWS it is often not possible to get more than a quick look at any new development. One recent development that has really caught my interest however is the new DynamoDB offering.

Essentially this is Amazon’s offering for the highly scalable NoSQL database market.

Like everything else in AWS it is charged on a pay-per-use basis; however, you have to specify in advance the level of read/write capacity you will require, and Amazon will set aside that level of capacity, guaranteed, for you. This value can be set via the console or changed in real time via the API. To be honest I found the pricing model a bit confusing: I wasn’t 100% sure what happened if you fell above or below the levels you set, or how the actual billing rate was calculated. Looking at the real-time billing information in my account didn’t clear this up. I think if I decide to use DynamoDB in any sort of anger I will need to contact Amazon for clarification.

Most of my NoSQL experience has been using Azure Table Services so my initial reaction is to compare the two.

In essence they work in a similar fashion. They are large key/value datastores where defined tables allow for the storage of entities containing an arbitrary number of attribute/value combinations that are stored against a key – called a hash key. Keys can be grouped in ranges, identified by a range key. If a range key is specified then the hash key/range key combination must be unique for that table; otherwise all hash keys for a table must be unique. For those familiar with Table Services, hash key is the equivalent of primary key, range key is partition key – however range key is optional in DynamoDB whereas partition key is compulsory in Table Services.
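The uniqueness rules can be pictured with plain dictionaries (an illustrative sketch only – this is nothing to do with the SDK itself):

```csharp
using System;
using System.Collections.Generic;

class KeyModelSketch
{
    static void Main()
    {
        // A table with a range key behaves like a map keyed on the
        // (hash key, range key) pair; the combination must be unique,
        // but hash keys on their own may repeat.
        var withRange = new Dictionary<(string Hash, string Range), string>();
        withRange[("thread1", "2012-05-01")] = "first reply";
        withRange[("thread1", "2012-05-02")] = "second reply"; // same hash key, still fine

        // A table without a range key is keyed on the hash key alone,
        // so every hash key must be unique.
        var hashOnly = new Dictionary<string, string>();
        hashOnly["thread1"] = "thread body";

        Console.WriteLine(withRange.Count); // prints 2
        Console.WriteLine(hashOnly.Count);  // prints 1
    }
}
```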

That’s about as much detail as I want to go into about the service itself. For more details, the Amazon literature is here: http://aws.amazon.com/dynamodb/

Let’s instead get into a bit of actual usage, particularly the AWS .NET SDK interface to it. All API calls can be made directly via JSON-based REST calls, but I’ve been testing using the SDK directly from .NET.

The first step is to install the AWS Visual Studio Toolkit (http://aws.amazon.com/visualstudio/). This will install the SDKs for the various AWS services, including DynamoDB, as well as the AWS toolbar for Visual Studio and a new AWS project type that comes with the correct references already added.

The AWS toolbar is a useful tool, giving you control over some AWS services from within Visual Studio, including creating DynamoDB tables and viewing/editing their content.

A sample of the DynamoDB schema. Image: aws.amazon.com

I used the toolbar to create some initial test tables. A few points to mention about the creation procedure:

  1. At creation time you need to specify whether the table will have a range key or not. This cannot be changed later. This struck me as a potential issue if you incorrectly judge how a table will grow and want to introduce ranging later on.
  2. At creation time you also have to specify the level of reads/writes per second you want assigned to this table. As I mentioned earlier I found this slightly off-putting, primarily because I actually had no idea. As this was a test table it might need a few reads/writes per second very infrequently, yet I had to commit to a fixed allocation (though this can be changed at any point).
  3. Amazon doesn’t currently offer an offline version for development, so any testing has to be done on the live AWS service. This could result in running up bills if doing full production-scale tests. AWS does offer a generous free tier for DynamoDB, but it is always something to be aware of when using AWS.
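On point 2, the reserved capacity can be re-provisioned after creation via the API. As a sketch (assuming the current SDK’s Amazon.DynamoDBv2 namespace, and a hypothetical table name – not the exact calls used at the time of writing), an update looks something like this:

```csharp
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

class AdjustThroughput
{
    static void Main()
    {
        // Credentials and region are picked up from config/environment.
        var client = new AmazonDynamoDBClient();

        // Re-provision the table's reserved read/write capacity;
        // this is the allocation the provisioned-throughput bill is based on.
        client.UpdateTable(new UpdateTableRequest
        {
            TableName = "Reply", // hypothetical table name
            ProvisionedThroughput = new ProvisionedThroughput
            {
                ReadCapacityUnits = 10,
                WriteCapacityUnits = 5
            }
        });
    }
}
```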

Having got my tables created, I could now start looking at how to enter data into them. The system I was looking to build in DynamoDB was a remote offshoot of an existing system that held a read-only subset of the data. All data would be pulled from REST API calls as JSON objects and stored within AWS until read and cached in memory.

DynamoDB sounded like a good candidate for this task, as I thought I could just read the JSON objects into .NET objects and store them directly in DynamoDB.

The first API example I looked at (http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/LoadData_dotNET.html) used a document object, assigned attributes to it, then pushed it to the remote table:

Amazon.DynamoDB.DocumentModel.Table replyTable = Amazon.DynamoDB.DocumentModel.Table.LoadTable(client, "Reply");

var thread1Reply1 = new Document();
thread1Reply1["Id"] = "Amazon DynamoDB#DynamoDB Thread 1"; // Hash attribute.
thread1Reply1["ReplyDateTime"] = DateTime.UtcNow.Subtract(new TimeSpan(21, 0, 0, 0)); // Range attribute.
thread1Reply1["Message"] = "DynamoDB Thread 1 Reply 1 text";
thread1Reply1["PostedBy"] = "User A";

replyTable.PutItem(thread1Reply1); // Push the document to the remote table.

Although this syntax is pretty straightforward, I didn’t really like the look of it. Converting my existing objects would be either a manual process involving a lot of magic strings, or an overly complex process of mapping data attributes to document attributes.

Luckily AWS have provided a solution and implemented that second approach for you. They have created a number of attributes that you can decorate your class with to define it as a DynamoDB class. Instances can then be read directly to and from the database.

The minimum attributes you need to define on a class are:

[DynamoDBTable("TableName")]
public class ClassName
{
    [DynamoDBHashKey]   // Hash key.
    public int Id { get; set; }
}

This will then save all data to the table TableName with the hash key of Id. You can also use the DynamoDBProperty attribute to mark a property to be read to and from the database, and even use DynamoDBProperty(“dbFieldName”) to associate it with a differently named attribute in the database table. DynamoDBIgnore will flag a property not to be transferred to the database.
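To illustrate how these attributes combine (a sketch only – the table name and properties here are hypothetical, not from the original system):

```csharp
[DynamoDBTable("Products")]            // Hypothetical table name.
public class Product
{
    [DynamoDBHashKey]                  // Unique key for the table.
    public string Sku { get; set; }

    [DynamoDBProperty("product_name")] // Stored under a different attribute name.
    public string Name { get; set; }

    [DynamoDBProperty]                 // Stored under its own name.
    public decimal Price { get; set; }

    [DynamoDBIgnore]                   // Never transferred to the database.
    public string CachedDescription { get; set; }
}
```

Saving an instance would write Sku, product_name and Price, and skip CachedDescription entirely.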

Therefore the only code needed to store an object into a DynamoDB table is to define the object:

[DynamoDBTable("Fruit")]
public class Fruit
{
    [DynamoDBHashKey]
    public string Name { get; set; }

    public string Colour { get; set; }
}


And then create an instance of the object and save it via the DynamoDB context:

AmazonDynamoDBClient client = new AmazonDynamoDBClient();
DynamoDBContext context = new DynamoDBContext(client);

Fruit banana = new Fruit()
{
    Name = "banana",
    Colour = "yellow"
};

context.Save(banana);

Getting it back out is even easier:

Fruit bananaFromDb = context.Load<Fruit>("banana");

This is a very nice and simple solution if you are defining your own classes.

I was impressed with the SDK, and it looks like a simple job to get data in and out of the database, so this could well be a viable option if a NoSQL solution is required – especially if Amazon’s claims about scalability hold up. There is an entity size limit of 64KB, which could be a limiting factor (Table Services allows 1MB), and the lack of an offline emulator for development could also cause issues.

All in all though, this is up to the usual high standard of tools that AWS produce.

Read more posts by Andy Still at Intechnica’s blog