DynamoDB up to the usual high standards of AWS tools: A NoSQL comparison

This blog post was originally written by Intechnica’s Technical Director Andy Still.

With the pace of new product releases on Amazon AWS it is often not possible to get more than a quick look at any new development. One recent development that has really caught my interest, however, is the new DynamoDB offering.

Essentially this is Amazon’s offering for the highly scalable NoSQL database market.

Like everything else in AWS it is charged on a pay-per-use basis; however, you have to specify in advance the level of read/write access you will require, and Amazon will set aside that level of capacity, guaranteed, for you. This value can be set via the console or in real time via the API. To be honest I found the pricing model a bit confusing: I wasn’t 100% sure what happened if you fell above or below the levels you set, or how the actual billing rate was calculated. Looking at the real-time billing information in my account didn’t clear this up. I think if I decide to use DynamoDB in any sort of anger I will need to contact Amazon for clarification.
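For what it’s worth, the unit you provision is the “capacity unit”. A back-of-envelope sketch of the arithmetic, assuming the early documentation’s figure that one unit covers one operation per second on an item of up to 1 KB (treat the numbers as illustrative, not billing advice):

```python
import math

# Hedged provisioned-capacity sketch. Assumed figure: one capacity unit
# covers one operation/sec on an item up to 1 KB, so larger items consume
# multiple units (rounded up to the next whole kilobyte).
def capacity_units(ops_per_sec, item_kb):
    return ops_per_sec * math.ceil(item_kb)

print(capacity_units(100, 1.5))  # 100 reads/sec of 1.5 KB items -> 200 units
```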

Most of my NoSQL experience has been using Azure Table Services so my initial reaction is to compare the two.

In essence they work in a similar fashion. They are large key/value datastores where defined tables allow for the storage of entities containing an arbitrary number of attribute/value combinations, stored against a key – called a hash key. Keys can be grouped into ranges, identified by a range key. If a range key is specified then the hash key/range key combination must be unique for that table; otherwise all hash keys for a table must be unique. For those familiar with Table Services, the hash key is roughly the equivalent of the PartitionKey and the range key of the RowKey – however the range key is optional in DynamoDB, whereas both keys are compulsory in Table Services.
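The uniqueness rules can be pictured with a toy in-memory model (a Python sketch of the key semantics, not SDK code):

```python
# Toy model of the key semantics: a table without a range key enforces
# unique hash keys; with a range key, the (hash, range) pair must be
# unique instead, so many items can share one hash key.
class Table:
    def __init__(self, has_range_key=False):
        self.has_range_key = has_range_key
        self.items = {}

    def put(self, hash_key, range_key=None, **attributes):
        key = (hash_key, range_key) if self.has_range_key else hash_key
        self.items[key] = attributes  # a put with the same key overwrites

    def query(self, hash_key):
        """Return all items sharing a hash key (meaningful with a range key)."""
        if self.has_range_key:
            return [v for (h, _), v in self.items.items() if h == hash_key]
        return [self.items[hash_key]] if hash_key in self.items else []

forum = Table(has_range_key=True)
forum.put("Thread 1", "2012-02-01", message="first reply")
forum.put("Thread 1", "2012-02-02", message="second reply")
print(len(forum.query("Thread 1")))  # 2 - both replies share the hash key
```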

That’s about as much detail as I want to go into about the actual service itself. For more details the Amazon literature is here: http://aws.amazon.com/dynamodb/

Let’s get instead into a bit of actual usage of this, particularly the AWS .NET SDK interface into it. All API calls can be made directly via JSON-based REST calls, but I’ve been testing using the SDK directly from .NET.
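For the curious, a raw call is just a signed HTTP POST carrying a JSON body. A sketch of roughly what a PutItem payload looks like on the wire (request signing and headers omitted; the exact shape is from my reading of the API reference, so treat it as illustrative):

```python
import json

# Hedged sketch of the JSON body a raw PutItem REST call carries.
# Attribute values are typed on the wire: "S" for string, "N" for number.
request = {
    "TableName": "Reply",
    "Item": {
        "Id": {"S": "Amazon DynamoDB#DynamoDB Thread 1"},
        "PostedBy": {"S": "User A"},
        "Votes": {"N": "3"},  # numbers travel as strings on the wire
    },
}
body = json.dumps(request)
print(json.loads(body)["Item"]["Votes"]["N"])  # the number round-trips as "3"
```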

The first step is to install the AWS Visual Studio Toolkit (http://aws.amazon.com/visualstudio/). This will install all the SDKs for the various AWS services, including DynamoDB, as well as the AWS toolbar for Visual Studio and a new AWS project type that comes with the correct references already added.

The AWS toolbar is a useful tool that lets you control some AWS services from within Visual Studio, including creating DynamoDB tables and viewing/editing their content.

A sample of the DynamoDB schema. Image: aws.amazon.com

I used the toolbar to create some initial test tables. A few points to mention about the creation procedure:

  1. At creation time you need to specify whether the table will have a range key or not. This cannot be changed later. This struck me as a potential issue if you incorrectly judge how much a table will grow and want to introduce ranging later on.
  2. At creation time you also have to specify the level of reads/writes per second you want assigned to this table. As I mentioned earlier I found this slightly off-putting, primarily because I actually had no idea. As this was a test table it might require a few reads/writes per second very infrequently, but I had to commit to a fixed allocation up front (though this can be changed at any point).
  3. Amazon doesn’t currently offer an offline version for development so any testing has to be done on the live AWS service. This could result in running up bills if doing full production scale tests. AWS does offer a generous free tier for DynamoDB but it is always something to be aware of when using AWS.

Having got my tables created I could now start looking at how to enter data into them. The system I was looking to build in DynamoDB was a remote offshoot of an existing system that held a read only subset of the data. All data would be pulled from REST API calls as JSON objects and stored within AWS until read and cached in memory.
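Since attributes at the time were flat scalar (or set) values rather than nested documents, JSON pulled from a REST call needs flattening before storage. A minimal Python sketch, using a hypothetical dotted-name convention for nested fields:

```python
import json

def flatten(obj, prefix=""):
    """Flatten nested JSON into dotted attribute names (illustrative convention)."""
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=name + "."))
        else:
            flat[name] = value
    return flat

doc = json.loads('{"Name": "banana", "Supplier": {"Name": "Acme", "Country": "UK"}}')
print(flatten(doc))
# {'Name': 'banana', 'Supplier.Name': 'Acme', 'Supplier.Country': 'UK'}
```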

DynamoDB sounded like a good candidate for this task, as I thought I could simply read the JSON objects into .NET objects and store them directly in DynamoDB.

The first API example I looked at (http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/LoadData_dotNET.html) used a Document object, assigned attributes to it, then pushed it to the remote table:

Amazon.DynamoDB.DocumentModel.Table replyTable =
    Amazon.DynamoDB.DocumentModel.Table.LoadTable(client, "Reply");

var thread1Reply1 = new Document();
thread1Reply1["Id"] = "Amazon DynamoDB#DynamoDB Thread 1"; // Hash attribute.
thread1Reply1["ReplyDateTime"] = DateTime.UtcNow.Subtract(new TimeSpan(21, 0, 0, 0)); // Range attribute.
thread1Reply1["Message"] = "DynamoDB Thread 1 Reply 1 text";
thread1Reply1["PostedBy"] = "User A";

replyTable.PutItem(thread1Reply1); // Push the document to the remote table.

Although this syntax is pretty straightforward, I didn’t really like the look of it. Converting my existing objects would be either a manual process involving a lot of magic strings or an overly complex process of mapping data attributes to document attributes.

Luckily, AWS has already implemented that second approach for you. They have created a number of attributes that you can decorate your class with to define it as a DynamoDB class. These can then be read directly to and from the database.

The minimum attributes you need to define on a table are:


[DynamoDBTable("TableName")]
public class ClassName
{
    [DynamoDBHashKey]   // hash key
    public int Id { get; set; }
}

This will then save all data to the table TableName with the hash key of Id. You can also use the DynamoDBProperty attribute to define that a property should be read to and from the database, and even use DynamoDBProperty("dbFieldName") to associate it with a differently named attribute in the database table. DynamoDBIgnore flags a field not to be transferred to the database.

Therefore the only code needed to store an object in a DynamoDB table is to define a class:


[DynamoDBTable("Fruit")]
public class Fruit
{
    [DynamoDBHashKey]
    public string Name { get; set; }

    public string Colour { get; set; }
}


And then create an instance of the object and save it via the DynamoDB context:

AmazonDynamoDBClient client = new AmazonDynamoDBClient();
DynamoDBContext context = new DynamoDBContext(client);

Fruit banana = new Fruit()
{
    Name = "banana",
    Colour = "yellow"
};

context.Save(banana);

Getting it back out is even easier:

Fruit bananaFromDb = context.Load&lt;Fruit&gt;("banana");

This is a very nice and simple solution if you are defining your own classes.

I was impressed with the SDK, and it looks like a simple job to get data in and out of the database, so this could well be a viable option if a NoSQL solution is required – especially if Amazon’s claims about scalability hold up. There is an entity size limit of 64 KB, which could be a limiting factor (Table Services allows 1 MB), and the lack of an offline emulator for development could also cause issues.
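That 64 KB limit is easy to trip over when storing serialized JSON, so a cheap pre-flight check is worth having. A rough Python sketch (the real size accounting in the documentation differs slightly; this just sums UTF-8 lengths of attribute names and values):

```python
ITEM_LIMIT_BYTES = 64 * 1024  # per-item limit at the time of writing

def fits_in_item(attributes):
    """Approximate item size as UTF-8 bytes of attribute names plus values
    (a simplification of the service's real accounting)."""
    size = sum(len(str(k).encode()) + len(str(v).encode())
               for k, v in attributes.items())
    return size <= ITEM_LIMIT_BYTES

print(fits_in_item({"Name": "banana", "Colour": "yellow"}))  # True
print(fits_in_item({"Blob": "x" * 70000}))                   # False
```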

All in all though, this is up to the usual high standard of tools that AWS produce.

Read more posts by Andy Still at Intechnica’s blog



  1. I find DynamoDB to be amazingly confining – guessing ahead of time how much traffic you will have and what your throughput needs are, for example. If I wanted to play DBA I would stick with Oracle and the RDBMS world generally.

    But if you can stand to write and read Java I guess you can learn to put up with anything. 🙂
