5 tools non-techies can use to manage data

Dealing with data is an integral part of running a startup, but most of us aren’t trained to handle huge data sets. To help you manage data better, here are five tools that make the task easier.

Visualizations with D3.js

OpenRefine – Most datasets contain inconsistencies and errors that need to be cleaned up before use. Errors can be caused by different date formats used for the same day, typing mistakes made during data entry, or just extra spaces where there shouldn’t be any. Spreadsheets can have duplicate entries, or entries that should be split into two (or more), and these can be hard to find. They can be one-off problems, or can span the entire dataset, such as a person’s name or location spelled differently each time. Finding and correcting these by hand is time consuming and risks introducing new errors while correcting the old ones. OpenRefine highlights possible errors and helps fix them across the entire dataset. It also helps in re-structuring and re-formatting data, merging it with other datasets, and even translating data into other languages.
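OpenRefine has its own expression language for these transformations, but the flavour of the clean-ups it automates can be sketched with plain Unix tools; the sample rows below are invented for illustration:

```shell
# Two classic data errors: a stray trailing space and a duplicate row.
# Trim the whitespace, then collapse the duplicates.
printf 'Delhi \nDelhi\nMumbai\nMumbai\n' \
  | sed 's/[[:space:]]*$//' \
  | sort -u
```

The four messy rows reduce to two clean values, Delhi and Mumbai; OpenRefine does the same kind of normalization interactively, with a preview of every change.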

Yahoo Pipes – Yahoo Pipes offers a very wide range of tools to mix and modify data, from very basic to advanced. The filters are particularly impressive. You can create a pipe to either define the filters at the initial stage or allow users to enter their own search terms through a form. Filters can include words, locations, latitude and longitude, regular expressions and many more. One of the great things about Yahoo Pipes is that you can take feeds from multiple sources, merge them into one and then filter and use the results for any application which accepts data feeds.

Google Fusion Tables – One of the best visualization tools for non-techies. Just upload or link to your data source and leave everything to Google Fusion Tables. The tool is still experimental and is available under Google Labs. You can create different kinds of charts and graphs with it. It also lets you create maps: it automatically scans your dataset for location data, geocodes it and plots the points on the map, without any intervention.

Heatmaps with Google Fusion Tables

D3.js – Though D3.js is primarily for techies, it deserves a mention as the best and most widely used visualization library available free of cost. D3, short for ‘Data-Driven Documents’, is a JavaScript library that can be used to create a diverse range of creative charts, diagrams and maps, which can be embedded anywhere on your site. It does, however, have a steep learning curve.

Open Heat Maps – You can create static maps as well as interactive, animated maps that let people watch data visualizations change over time. You can map any dataset that is linked to locations such as IP addresses, street addresses or latitude and longitude coordinates, and you can upload spreadsheets from Excel or use Google Docs. The best part is that it handles dense data by merging nearby plot points into larger ones, illustrating concentrations in particular locations.

Which tools do you use to handle your data? Tell us!

Brand utility apps: how brands can leverage technology to create true brand affinity

In the hyper-cluttered space of advertising, what differentiates a good ad is interaction. The important thing is that once a brand has managed to catch people’s attention through an ad, it can’t afford to waste the opportunity.

This is where digital marketing adds a lot of value. The medium lends itself well to triggering engagement and can be used to ‘help’ or ‘entertain’ a person. Unfortunately, a lot of digital marketing spend is still about ‘interrupting’ the person as opposed to adding value.

Welcome to the world of brand utility, where brands look to provide a useful service or give people something they actually need — without demanding an immediate return. Now with the massive adoption of smartphones and social networks and a range of startups and app developers, it has never been easier to leverage digital apps that provide a clear value to people and in return create a long lasting affinity for the brand.

In this article we will briefly look at four ‘brand utility’ examples where brands chose to market themselves by providing a utility value to people.

Stiegl – Free public transportation ticket on beer bottles

Design firm Demner, Merlicek & Bergmann came up with an ingenious way to help Stiegl dissuade people from drunk driving. Stiegl replaced the traditional label on their bottles of beer with a free ticket for public transportation. “The campaign not only helped to save lives but also promote Stiegl as a socially responsible beer producer,” the company claims.

Starbucks: Early Bird catches a discount drink

Starbucks connected its product to the act of waking up in the morning. They launched an app called ‘Early Bird’ to encourage people to wake up on time.

The app was just like an alarm clock. If the app users pressed ‘wake up’ instead of snooze, they would earn a discounted coffee or other drinks at any Starbucks store within one hour of waking up. This was a fun way to connect with the target audience while helping them get up on time.

Brand Utility Apps

Nestle – Dessert – chocolate recipe idea app

Instead of pushing a standard message, Nestlé built an app to provide ideas for desserts using its products. The dessert app offered free daily recipes featuring black chocolate, dark roast, milk, white, caramel, praline or coffee. What they created was far more useful and interesting for customers, and hence saw great adoption.

Sherwin-Williams: ColorSnap app

Imagine spotting the perfect shade of paint while out on a walk — and being able to translate that image into a palette at the paint store. That was the idea behind Sherwin-Williams Co.’s ColorSnap Glass, a free mobile app that the Cleveland paint company launched for users of Google Glass.

But even if you don’t have Google Glass or any intentions of getting it, you could still try out ColorSnap’s technology via its free mobile apps for iPhones, Android and Blackberry.

ColorSnap mobile apps let users upload photos of their room and virtually try out more than 1,500 Sherwin-Williams colours, varying them by light and intensity until they find a shade they are happy with. They can then save the palette or share it via email or Facebook. ‘Advertising Age’ hailed the app as one of the Top 10 ‘Cool Branded iPhone Apps’.

The case studies mentioned above are entirely possible, eminently affordable and very effective. This approach puts brands into the centre of people’s lives, earning those brands attention and engagement.

Creative entrepreneurs and business leaders have an opportunity to leverage the advances in app development space to guide marketing departments in bringing life to branded utilities. These apps help provide clear value to people and create a long lasting relationship with each existing and potential customer.

About the author

 

Kaushal Sarda is the CEO of Kuliza Technologies (www.kuliza.com). Kuliza builds mobile-ready sites, apps and cross-device campaigns for businesses. They have worked with brands like Titan, Myntra, Intuit, Whirlpool, Van Heusen and Himalaya Drug Company.

 

Emkor Solutions partners with TARGIT to provide game changing BI Solutions

June 13, 2013, New Delhi, India: Emkor Solutions and TARGIT today announced their partnership to deliver “Business Intelligence as a Service” across multiple platforms and enterprise data sources in India. Through this partnership, Emkor will become the exclusive partner of TARGIT BI in the India region.

These offerings will transform the way emerging and medium-sized businesses use data analysis to make business decisions. The solution will include SaaS-based BI services on the cloud for emerging and mid-market companies seeking strong business intelligence capabilities to keep pace with industry trends and create sustainable competitive advantage. With key business benefits like no vendor lock-in and no upfront investment in IT infrastructure, Emkor will help businesses reduce capital expenditure and improve operational efficiency through better data insights.

“Emkor is a very professional team with an ambitious plan of enabling companies on various business management platforms to get a much clearer picture of their business; they have the right team, the right mindset and now also the right software to succeed with this task. Emkor has all the technical and commercial capabilities that we are looking for in a distributor and cloud partner; they are forward-looking and very meticulous in their work,” says Flemming Madsen, Vice President of Sales, TARGIT.

The Indian BI market was estimated at $101.5 million by Gartner in 2012 and is expected to grow at 15% year on year for the next five years. Corroborating this, Mr. Madsen also adds, “There is a huge market potential in India, lots of customers, and the market is growing rapidly. Today it is important for all companies to be on top of their data to succeed in a world where things are continuously happening faster and faster and where worldwide competition is increasing.”

Vikram Dham, EMKOR Solutions Limited

This partnership will be a win-win situation for both the solution providers and will open new avenues to explore in terms of market size and reach. These services will be offered using Emkor Solutions strong network of partners and system integrators.

Flemming Madsen, Vice President Sales, TARGIT

“We are happy to announce that Emkor has expanded its portfolio of cloud-based solutions by adding the world’s best business intelligence and analytics tools, powered by TARGIT. Our industry and domain knowledge, coupled with specialization in the TARGIT suite of products, makes for a compelling offering of innovative and differentiated solutions for end customers,” said Vikram Dham, CEO & Co-Founder, Emkor Solutions Limited.

About Emkor Solutions Limited

Founded in 2011, Emkor is among the few companies focused on cloud offerings and has pioneered the concept of “Business Function as a Service (BFaaS™)”. Through this concept, Emkor Solutions is empowering fast-growing companies to transform from people-driven to process-centric organizations so as to Build Tomorrow’s Enterprise Today. Learn more at www.emkor.com and LinkedIn.

About TARGIT

TARGIT is a Danish business intelligence software company headquartered in Hjørring, North Jutland, Denmark. According to Gartner, TARGIT is the world’s largest BI vendor for companies using Microsoft Dynamics NAV or AX. TARGIT has over 4,600 customers with close to half a million named users. Learn more at www.targit.com, Twitter, Facebook, LinkedIn and Google+.

Generic PaaS is not disruptive!

Change. Continuous change is what we have witnessed since computing began back in the 1960s. We have had many transformational waves in how software is built, deployed and accessed: from COBOL and mainframes to client-server and PCs, to the web, to multi-tier architectures, and the whole nine yards of how code is written, arranged, deployed and managed.

And then cloud came with the mother of all changes and changed everything across the board. It was not just a technology change; it also brought innovative approaches to business models and delivery models. Platform as a Service took this further and came to be known as a radical, transformational change! Now, let’s take a step back and observe what’s really happening. OrangeScape being a PaaS company, I want to stay focused on PaaS and dig into it a little deeper.

If there is one thing that differentiates a “platform” from a “product”, it is programmability. Hence platforms are about programmers. So, the logical question to ask is: what has PaaS done for developers? How has PaaS impacted them? I’d stick my neck out and say that the Platform-as-a-Service we see from the giants in the mainstream today is an incremental step forward, not radical or disruptive in the true sense of the term! Of course, there are aspects of disruptive innovation in the “DevOps/NoOps” area.

So, what is disruptive? And what is radical? Just when I was looking for a way to get my head around this, I recalled the importance of 10X performance, made famous by Jim Collins in his latest book, “Great by Choice”. Even though that applies to the way businesses perform, I see a parallel to technology innovation. And this week, Larry Page also warned companies that are happy with 10% improvement. If you’re looking for a 10% improvement and not living by the gospel of 10X, then you’re basically doing the same thing as everyone else. Moonshots are radical; they are 10X improvements! The current mainstream PaaS offerings from the industry giants, even with all the noise around them, are incremental by the same rule. Using PaaS, developers still deploy code on middleware, only now it is managed middleware on the cloud. In my humble opinion, this is surely much better than what we had pre-cloud/pre-PaaS, but it falls short of radical or disruptive, i.e. the 10X rule.

Ask the question: has PaaS made a sizeable impact on the developer base? The answer is a big NO. I see mainstream PaaS vendors addressing the same fixed developer base, which grows in single digits every year. The only thing they have come up with is “polyglot” support, so that they can expand their addressable market; net developer growth stays the same. When something is disruptive, it forces many people to jump in. That hasn’t happened. How can I say this? According to Evans Data, there are around 16 million developers in total worldwide, and this number is growing only incrementally, not doubling or tripling. This simply means that the mainstream PaaS offerings haven’t created enough disruption for the net developer population to grow exponentially. At OrangeScape we have embarked on a journey to create such disruption and growth in the developer base, staying true to our mission to “Democratize Computing”. Only time will tell. :-)

About the Author

Suresh Sambandam is the Founder and CEO of OrangeScape, a global top-10 Platform-as-a-Service company. OrangeScape’s Visual PaaS helps create business applications quickly and easily, and OrangeScape is the world’s only cross-cloud platform. OrangeScape is featured in multiple research reports from Gartner and Forrester and has marquee customers like Citibank, Unilever, Pfizer, AstraZeneca and Fullerton. OrangeScape has partnered with Tier 1 service providers like TCS, Cognizant, Wipro and five others to support enterprise implementations.

How to Configure MongoDB HA Replica Set on AWS EC2

It has always been a tedious task to choose the right configuration for MongoDB on AWS EC2. Getting the configuration right in this environment is challenging, and it takes a lot of time to make your system production-ready.

You can use the following configuration and steps to install MongoDB on EC2 and create a production-ready HA replica set.

All you need is two machines to serve as the PRIMARY (master) and SECONDARY (slave) nodes, plus one ARBITER machine for the replica set. This may change based on your application’s requirements, and you can opt for a higher number of nodes. An ARBITER is only required when the replica set would otherwise have an even number of voting members; if you run one PRIMARY and two SECONDARY nodes, an ARBITER is not required.

Hardware Requirement

  1. Two 64-bit EC2 instances of medium/large or higher configuration, based on your app’s requirements, for the PRIMARY and SECONDARY nodes. 32-bit machines have a data storage limitation and can only support up to around 2.5 GB.
  2. A small 32-bit EC2 machine for the MongoDB ARBITER.
  3. It is recommended to place the machines in different availability zones so the set stays highly available if one availability zone goes down.
  4. Use an ext4 EBS volume to support I/O suspend and write-cache flushing for multi-disk consistent snapshots.

Installation steps

  1. Create and launch EC2 instances of the required configuration, as stated above, for the PRIMARY, SECONDARY and ARBITER nodes.
  2. Create an EBS volume of the required size to be used for MongoDB storage on both data nodes.
  3. Connect to the PRIMARY and SECONDARY EC2 instances via SSH.
  4. Make an ext4 file system on both nodes via sudo mkfs -t ext4 /dev/<Created_EBS_Volume>
  5. Create the directory /data/db (or any other of your choice) and mount the attached volume on it using sudo mount /dev/<Created_EBS_Volume> /data/db
  6. Edit your /etc/fstab so the volume is mounted when the instance starts, using echo '/dev/sdf /data/db auto noatime,noexec,nodiratime 0 0' | sudo tee -a /etc/fstab
  7. Download and Install MongoDB on all instances.
  8. Start the PRIMARY node with the following command in the MongoDB directory: mongod --rest --replSet myHASet (where myHASet is the name of the replica set; you can choose any name)
  9. Go to the mongo terminal in the MongoDB directory.
  10. Initialize the set using the rs.initiate() command on the mongo terminal.
  11. Check the status of the replica set after initialization using the rs.status() command.
  12. If initialization succeeded, you will see "ok" : 1 in the output, something like this:
    {
    "set" : "sample",
    "myState" : 1,
    "members" : [
    {
    "name" : "<PRIMARY_HOSTNAME>:27017",
    "self" : true
    }
    ],
    "ok" : 1
    }
  13. You can also check the status at http://<PRIMARY_NODE>:27017/_replSet
  14. Your PRIMARY node is now ready to use. You can insert/update documents on this node.
  15. Now start the SECONDARY node with the same command as on the primary: mongod --rest --replSet myHASet
  16. Now tell the PRIMARY node to add the SECONDARY node to the replica set. Go to the mongo console on the PRIMARY node and add it using rs.add("<SECONDARY_HOSTNAME>");
  17. If the addition is successful, you will see the response { "ok" : 1 }
  18. Once your SECONDARY node is attached to the replica set, you can check the status at http://<PRIMARY_NODE>:27017/_replSet
  19. Now start the ARBITER node using mongod --rest --replSet myHASet --oplogSize 8
  20. Add the ARBITER node to the replica set using the command rs.add( { _id: 2, host: "<ARBITER_HOSTNAME>", arbiterOnly: true } )
  21. Once the ARBITER is added successfully, you are done with the configuration and your replica set is ready to use.
  22. Go to http://<PRIMARY_NODE>:27017/_replSet and you should be able to see the status of each node.
  23. To test the replica set, take down the PRIMARY node and check whether the SECONDARY picks up and becomes the new PRIMARY.
  24. You can run db.isMaster() to check whether the SECONDARY node has become the master.
  25. You can additionally use horizontal sharding (a shard cluster) to scale large volumes of app data. Configuring a shard cluster is beyond the scope of this article.
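Rather than eyeballing the JSON from step 12, you can check the key fields programmatically. A small sketch using the sample status document from above (parsing it with a Python one-liner is just one convenient option):

```shell
# myState 1 together with ok: 1 means this member is the replica set PRIMARY
STATUS='{"set":"sample","myState":1,"members":[{"name":"PRIMARY_HOSTNAME:27017","self":true}],"ok":1}'
echo "$STATUS" | python3 -c '
import json, sys
s = json.load(sys.stdin)
print("PRIMARY" if s.get("ok") == 1 and s.get("myState") == 1 else "NOT PRIMARY")
'
```

The same check against the SECONDARY after a failover (step 23) tells you whether the election has completed.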

Connecting to the Replica Set from the Java API

After you have set up the replica set successfully, you can connect to it from your client application using the Java driver.

You can use the following code snippet to connect to the replica set:

List<ServerAddress> addrs = new ArrayList<ServerAddress>();
addrs.add( new ServerAddress( "<PRIMARY_HOST>", <MONGO_PORT> ) );
addrs.add( new ServerAddress( "<SECONDARY_HOST>", <MONGO_PORT> ) );
Mongo m = new Mongo( addrs );
DB db = m.getDB( "<NAME_OF_DB>" );

The MongoDB driver is smart enough to talk to the PRIMARY node only; if the PRIMARY node goes down, it will automatically switch to another node for communication.

This has been an honest attempt to guide you through setting up MongoDB on AWS EC2. This is an open forum, so feel free to post a comment if I have missed anything.

Also, if you don’t want to get into setting up the infrastructure and administration for MongoDB, you can directly use our App42 NoSQL Cloud Storage Service. This service can be accessed using our REST API or through native platform SDKs available for iOS, Android, J2ME, Java, PHP, Ruby, Windows Phone and C#.

About the Author:

This article was written by Ajay Tiwari, Product Architect at ShepHertz Technologies (http://www.shephertz.com).

Tutorial: Installing and Configuring the AWS CLI

One of the biggest complaints from developers using AWS is the fragmentation of the command line tools. Each service uses its own set of tools written in a separate language. For example, EC2 command line tools are written in Java while Beanstalk tools are developed using Ruby and SES command line tools are based on Python. This makes it extremely difficult to configure and manage multiple AWS services from the command line.

Keeping this in mind, AWS has developed a new set of command line tools called the AWS CLI that consolidates the various AWS tools. It’s a unified set of tools that supports popular services including EC2, RDS, Beanstalk, SQS, SNS, SES, CloudWatch and CloudFormation, eliminating the need to install and configure separate tools for each service.

Here is a step-by-step guide to install and configure AWS CLI.

Step 1 – Download and install the AWS CLI

[crayon lang="shell"]

mkdir /opt/aws
cd /opt/aws
curl http://python-distribute.org/distribute_setup.py | python
curl https://raw.github.com/pypa/pip/master/contrib/get-pip.py | python
pip install awscli

[/crayon]

Step 2 – Create a file called aws_credentials.txt and add the Access Key, Secret Key and the default Region

[crayon lang="text"]

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region = ap-southeast-1

[/crayon]

Step 3 – Configure the AWS_CONFIG_FILE environment variable

[crayon lang="shell"]

export AWS_CONFIG_FILE="/opt/aws/aws_credentials.txt"

[/crayon]
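Before moving on, it can help to sanity-check the setup. The sketch below writes the sample file to /tmp (purely for illustration; the tutorial uses /opt/aws) and verifies that the keys the CLI expects are present:

```shell
# Write a sample credentials file (values are the AWS documentation examples)
cat > /tmp/aws_credentials.txt <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region = ap-southeast-1
EOF
export AWS_CONFIG_FILE="/tmp/aws_credentials.txt"

# Fail early if a required key is missing
for key in aws_access_key_id aws_secret_access_key region; do
  grep -q "^$key = " "$AWS_CONFIG_FILE" || { echo "missing: $key"; exit 1; }
done
echo "config OK"
```

If any key is missing, the script names it instead of leaving you to decode an AWS CLI error later.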

Step 4 – Test the configuration by typing the following command

[crayon lang="shell"]

aws ec2 describe-regions

[/crayon]

Below is a screencast of this tutorial

- Janakiram MSV, Chief Editor, CloudStory.in

Tutorial: Installing and Configuring Windows Azure Command Line Tools for Linux on Ubuntu

Windows Azure’s command line tools were traditionally available only on Microsoft Windows, in the form of PowerShell Cmdlets. But with the June/Spring release of Windows Azure, Microsoft shipped a command line interface for Mac and Linux. This tutorial is a walkthrough of the steps involved in setting up, configuring and using the Windows Azure command line tools on Ubuntu.
Windows Azure command line tools for Linux are built using Node.js. So, let’s start with the installation of Node.js.

Step 1 – Install Node.js
Run the following commands to install the latest build of Node.js

[crayon lang="shell"]

sudo add-apt-repository ppa:chris-lea/node.js

[/crayon]

[crayon lang="shell"]

sudo apt-get update

[/crayon]

[crayon lang="shell"]

sudo apt-get install nodejs npm

[/crayon]
Make sure that Node.js is installed and working by typing this command
[crayon lang="shell"]

node --version

[/crayon]

Step 2 – Install the Windows Azure package

We will now download the Azure specific packages for node.

[crayon lang="shell"]

sudo npm install azure -g

[/crayon]

Step 3 – Configuring the tools to use a specific Windows Azure account

This step points the Windows Azure command line tools to a valid Windows Azure account. You may have multiple active accounts, but the tools can only operate on one account at a time.
Running the following command will open a browser window with the sign-in page, which will redirect you to the publish settings file.

[crayon lang="shell"]

azure account download

[/crayon]

Once you have downloaded the publish settings configuration file, point the tools to it with the following command

[crayon lang="shell"]

azure account import

[/crayon]

Let’s test the import by running the following command. If everything went fine, you should see the current configuration.

[crayon lang="shell"]

azure config list

[/crayon]

To switch to a different account, use the following command

[crayon lang="shell"]

azure config set subscription

[/crayon]

Step 4 – Trying out a few useful commands

To list all the storage accounts

[crayon lang="shell"]

azure account storage list

[/crayon]

To check the locations where the VM features are available
[crayon lang="shell"]

azure vm location list

[/crayon]

To check the locations where the web sites features are available

[crayon lang="shell"]

azure site location list

[/crayon]

Below is the screencast of this tutorial

- Janakiram MSV, Chief Editor, CloudStory.in

Tutorial: Accessing MySQL Data Service on Cloud Foundry through Tunneling

Cloud Foundry is an Open PaaS supporting many languages, runtimes, frameworks and services. Cloud Foundry exposes MySQL, PostgreSQL, MongoDB, RabbitMQ and Redis as services that offer the database and messaging capabilities. Developers can easily bind the applications to one of these services during the deployment.

For a detailed walkthrough on getting started with Cloud Foundry, you can refer to the tutorial that we published (Part 1, Part 2 and Part 3) on CloudStory.in a while ago.

After you point vmc to a specific Cloud Foundry target, typing vmc services will show you the available services. For example, after targeting http://api.cloudfoundry.com, vmc services will show the following.

[crayon language="shell"]
vmc services
[/crayon]

During the deployment, Cloud Foundry asks if you want to bind the application to any of the services. Once you choose a specific service, it can be accessed during the runtime through the VCAP_SERVICES environment variable. But many times, developers need to access the services like MySQL, PostgreSQL and MongoDB directly to manage the database. This is where Cloud Foundry tunneling comes into the picture. In this tutorial, we will access MySQL from the local machine through the popular MySQL Workbench.
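For reference, VCAP_SERVICES holds a JSON document keyed by service type; the snippet below sketches how an app could pull the MySQL credentials out of it. The JSON is a trimmed, invented example of its shape, and the Python one-liner is only one way to parse it:

```shell
# Inside the app container, Cloud Foundry sets VCAP_SERVICES to JSON like this
# (the service key and credential values here are illustrative)
export VCAP_SERVICES='{"mysql-5.1":[{"name":"mysql-demo","credentials":{"hostname":"10.0.0.5","port":3306,"user":"uABC","password":"pXYZ"}}]}'
echo "$VCAP_SERVICES" | python3 -c '
import json, sys
creds = json.load(sys.stdin)["mysql-5.1"][0]["credentials"]
print(creds["hostname"], creds["port"])
'
```

This is what your application does at runtime; tunneling, covered next, is for reaching the same database from your local machine.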

First, let us make sure that the MySQL data service is provisioned for us. We can check this by typing vmc services and looking under the provisioned services.

Now, let’s install the Caldecott gem to enable tunneling. Caldecott is a simple Ruby gem that enables port forwarding on the local machine.

[crayon language="shell"]
gem install caldecott
[/crayon]

With Caldecott in place, it’s time for us to create the tunnel. We do this by typing vmc tunnel.

[crayon language="shell"]
vmc tunnel
[/crayon]

Note that this command shows the provisioned services. After we select the service, the next step is to choose how to connect to it.
If the mysql command line is in the path, choosing option 2 will launch it with the appropriate parameters. But by choosing option 1, we can use any tool that connects to MySQL. In this case, we will use MySQL Workbench.

Launch MySQL Workbench and click on New Server Instance under Server Administration.

Select Remote Host and enter 127.0.0.1. Do not enter the port number at this point.

In the next step, enter the port number that vmc tunnel has shown followed by the username and password.

Click continue to see the confirmation.

Finally, double clicking on the server will launch the query editor.

Now, you can run queries and perform any operations on the database provisioned at CloudFoundry.com.

- Janakiram MSV, Chief Editor, CloudStory.in

Tutorial: Adding a Custom Domain Name to Windows Azure Web Site

One of the recently added features to Windows Azure is the Web Sites. It gives developers a chance to deploy ASP, ASP.NET, Node.js and PHP web sites with no friction. Typically, the deployed websites are accessible through http://yourwebsitename.azurewebsites.net. While the free threshold doesn’t support attaching a custom domain name, moving the web site to the reserved mode enables us to point a custom domain to it. This tutorial shows you how to configure an external DNS service for a website deployed to Windows Azure Web Sites.

This tutorial assumes that you have a valid domain already registered with a registrar like Go Daddy. To separate the DNS management from the domain registrar, we will sign up with DNSMadeEasy.com. This will let us manage the DNS independent of the domain registrar and the hosting platform. The other advantage of DNSMadeEasy.com is the programmability of the DNS through the REST API.

There are three steps involved in adding a domain name –

  1. Configuring Go Daddy to point the Nameservers to DNSMadeEasy
  2. Adding the URL of the Azure Web Site to DNSMadeEasy as a CNAME record
  3. Updating the hostname at the Windows Azure Management Portal

Prerequisites

  • A registered domain
  • Trial account at DNSMadeEasy.com
  • Web Site successfully deployed at Windows Azure listening at http://yourwebsitename.azurewebsites.net

Step 1 – Configuring Go Daddy to point the Nameservers to DNSMadeEasy

Login to DNSMadeEasy.com and click on the DNS menu at the top to select Managed DNS

Click on Add Domains button and type the domain name that is already registered with Go Daddy

After a few minutes, click on the domain name that was just added. Click on the Name Servers tab to see the list of Name Servers assigned to this domain.

Make a note of the Name Servers listed here. Notice that the FQDN reflects the Name Servers currently assigned by Go Daddy. We will change this by logging on to the Go Daddy control panel. Launch the Domain Manager for the domain you want to configure in Go Daddy and click on the Set Nameservers link.

Select the ‘I have specific nameservers for my domain’ option and enter the Nameserver addresses provided by DNSMadeEasy.

Give it a few minutes for the DNS to propagate. This will point the domain to the Nameservers of DNSMadeEasy. The Go Daddy control panel will confirm this by showing the new DNS Nameservers.

Step 2 – Adding the URL of the Azure Web Site to DNSMadeEasy

Currently, the website deployed at Windows Azure is available at http://cloudreadydemo.azurewebsites.net/

Switch to DNSMadeEasy and click on the domain that was created in the last step. Under the CNAME Records, click the + sign and add the endpoint of the Windows Azure Web Site. Enter www for the Name and the URL of the Windows Azure Web Site with a trailing period (.) at the end. This will instruct DNSMadeEasy to point the domain to the complete URL instead of creating a sub-domain. Finally, click on Submit.
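In zone-file terms, the record created above would look something like this (the domain and TTL are illustrative):

```
www.example.com.    3600    IN    CNAME    cloudreadydemo.azurewebsites.net.
```

Without the trailing period, DNS software would treat the target as relative to the zone and append the domain name to it again.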

Step 3 – Updating the hostname at the Windows Azure Management Portal

Since we can only add the domain to the websites running in the reserved mode, we will first move the website from the shared mode to reserved mode. To do this, login to Windows Azure Management Portal and select the website that you want to configure. Click on Scale and switch to reserved mode.

After that is done, click on configure, enter the custom domain name under the hostnames section and click on save button at the bottom.

Windows Azure will check whether the CNAME records are already configured for this URL and, if it finds them, it will accept the hostname.

Now our website is accessible through the custom domain.

Hope this helps you to easily configure a custom domain for your Windows Azure Web Sites.

- Janakiram MSV, Chief Editor, CloudStory.in

Tutorial: Getting Started with Cloud Foundry – Part 3/3

Part 1 of this article introduced Cloud Foundry and walked you through the configuration of Micro Cloud Foundry for the offline deployment. In part 2, we reconfigured Micro Cloud Foundry to go online and expose the deployed application on the public internet. In the final part, we will move the application to the Public Cloud running at CloudFoundry.com.

CloudFoundry.com is the Public Cloud that hosts Cloud Foundry. We can deploy applications to run on this environment just by targeting the endpoint, http://api.cloudfoundry.com. Let’s repeat a few steps to run our simple Ruby application on CloudFoundry.com. Now that we are familiar with the vmc tool, just run the following commands in the same sequence.

[crayon lang="shell"]
vmc target http://api.cloudfoundry.com
[/crayon]

Let’s login to the Public Cloud by entering the credentials that we provided during the signup process.

[crayon lang="shell"]
vmc login
[/crayon]

Finally, we push the application to CloudFoundry.com. Give the app a unique name, as it becomes part of the DNS name for our application.

[crayon lang="shell"]
vmc push
[/crayon]

We can access the application by typing the url http://hello-cf-jani.cloudfoundry.com/.

Note that the subdomain is a part of Cloudfoundry.com which is an indication that our app is running on the Public Cloud.

In this tutorial we have seen how to deploy applications running on Cloud Foundry in 3 different modes – 1) Micro Cloud Foundry in the offline mode, 2) Micro Cloud Foundry in the online mode and, 3) Public Cloud running at CloudFoundry.com.

- Janakiram MSV, Chief Editor, CloudStory.in