Random Acts of Kindness – Story of Full Glass of Milk

I read this story (reproduced here verbatim) today in TOI, in the Speaking Tree edition. It is such a beautiful story, it just touched my heart. Often we think that the small, nice things we do won’t have any visible impact, and so we aim only for bigger things. The takeaway from this story, for me, is the gentle reminder that it doesn’t have to be that way. Even though you may not always see it, everything always starts small – be it changing oneself or changing the world.

“One day, a poor boy who was selling goods from door to door to pay his way through school, found he had only one thin dime left, and he was hungry. He decided he would ask for a meal at the next house. However, he lost his nerve when a lovely young woman opened the door.

Instead of a meal, he asked for a drink of water. She thought he looked hungry and so she brought him a large glass of milk. He drank it slowly, and then asked, “How much do I owe you?”

“You don’t owe me anything,” she replied. “Mother has taught us never to accept pay for a kindness.” He said, “Then I thank you from my heart.” As Howard Kelly left that house, he not only felt stronger physically, but his faith in God and man was strengthened also. He had been ready to give up and quit.

Years later, that young woman became critically ill. The local doctors were baffled. They finally sent her to the big city, where they called in specialists to study her rare disease.

Dr. Howard Kelly was called in for the consultation. When he heard the name of the town she came from, he went down the hall of the hospital to her room. Dressed in his doctor’s gown, he went in to see her. He recognized her at once. He went back to the consultation room determined to do his best to save her life. From that day, he gave special attention to the case.

After a long struggle, the battle was won. Dr. Kelly requested from the business office to pass the final billing to him for approval. He looked at it, then wrote something on the edge, and the bill was sent to her room. She feared to open it, for she was sure it would take the rest of her life to pay for it all. Finally she looked, and something caught her attention on the side of the bill. She read these words:


“Paid in full with one glass of milk.”

Dr. Howard Kelly.”



Is Software Eating Our World in India?

Just finished reading Marc Andreessen’s article on “Why Software Is Eating the World“.  An excellent read and worth your time, especially if you are not “seeing” what’s been happening around you of late.  As Simon Wardley comments on it, this article talks mostly about the “what” and not the “why”.

After reading this, I couldn’t help wondering whether software is really eating everything in our beloved country too.

In the context of India, let me take the industry examples that Marc discusses in his article and see whether software is ruling the world here.

Books – While it is a known fact today that Amazon will launch its store in India sometime early next year, the only other name one can think of today is FlipKart.com. Beyond the top few cities, I don’t think anyone knows this brand or uses it to buy books.  Back in 1999, we built India’s first online music store for a startup called Fabmall. We launched the original site in 9 weeks with complete payment integration from CitiBank (this was the first payment gateway implementation for CitiBank as well, and I still remember the integration issues and debugging them late into the night, sitting in the CitiBank office in Bangalore).  They went on to add books, electronics, grocery and much more, along the lines of Amazon.com.  I remember ordering groceries in the middle of the night to be shipped the next day, and I thought life was good. Alas, they were too early for the Indian market and couldn’t sustain themselves for long.

What do I do for books today? I rarely buy from brick-and-mortar stores. I mostly order Kindle books and read them on a laptop/tablet.  They cost less, are delivered faster than any other approach, and pretty much any book is available. On every count, online stores beat the physical ones.  Great, but “I” and people like me represent a minuscule percentage of India’s total population that buys books from online stores. The rest of India continues to buy books from physical stores, and probably will for the foreseeable future.

Music – Same story as books.  While the magnetic tape is dead in tier-1 and tier-2 cities, it still thrives among the rest of the population. While the younger generation has switched over to digital music, thanks to the countless clones of iPods, I haven’t seen anyone buying music online. And where would they buy it even if they wanted to? An Internet connection at home is still a luxury in tier-3 cities and below.

In fact, when I recently asked my sister’s kid where she gets her music collection from, she looked at me with surprise, as if the answer was obvious, and replied “Friends”. And where do these “Friends” get their music from? The other “Friends”. This is where the Indian Jugaad comes to the rescue 🙂  You would find these enterprising guys in every corner of the main streets with a small one-room shop. Some just migrated from the magnetic tape copying and CD burning business to digital versions. These guys, who usually have a techie friend (or a friend who knows a techie friend…) who can pirate the latest music from torrents, will help you copy the latest music in bulk onto your phone/digital player for just a few rupees.

Video & Entertainment – Clearly, no one knows about Netflix in India (except for people like me who have directly worked with them, or people working in MNCs visiting their US counterparts, or people who closely monitor technology). Yes, some of us have YouTube access and some download torrents regularly for the latest movies, etc. But to a large extent, it is still the movie halls & TV that dominate the scene. Even online ticketing is limited to the top tier-1 cities. On the TV front, only now are we seeing some DTH providers transmitting half a dozen or so HD channels.

LinkedIn – I have recruited engineering people in the past, and I know several of my friends who recruit people into IT companies regularly. It seems that no one uses LinkedIn to recruit people, although it sounds obvious that they should. We still rely heavily on middle-men (recruiting agencies) and largely seem to be comfortable with that.  There are some other reasons, but I hope this will change very soon (unlike the above).


Some interesting numbers from a different brick & mortar world: RBI estimates that only about 20% of the Indian population has access to banking. Even if you exclude people under 20 years of age, this is very sparse coverage. With a total of about 84K branches (of all commercial and state-run banks), only 5% of the villages have access to banking.

I can go on with more examples, but they all tell the same story.  Our evolution to using the latest technology services doesn’t necessarily follow the steps the developed world has taken, and for a good reason. We will probably skip over several of the steps and catch up with the latest. The cell phone is a good example.  This is one technology you will see in every corner of the country. You may find it difficult to find a post office or an internet connection, but you can pretty much expect someone to have a cell phone.

But the analogy stops there. Our use of the cell phone (again, beyond the very small percentage of people who own connected smartphones) is pretty much limited to SMS and voice calls – what it was originally meant for.

I think the root of the problem is still the lack of widespread Internet connectivity. While we have figured out some workarounds (the digital music piracy example), we are not going to see a cell-phone-like revolution in other industries without widespread Internet connectivity.  I wonder what will drive this?

Some of us will continue to write software to eat the “worlds” out there, but not here and not now.

Self-service Configuration for Auto-scaling Cloud Applications

Before the cloud, deploying an application into production meant downloading the prerequisite software and application code, installing both, configuring it from the CLI/UI, switching configuration files to the production setup (not the test setup), tweaking them to access production databases/servers, etc. etc. If your operations guy was organized, he would have a lengthy checklist of things to do. And god bless you if your application required Microsoft SharePoint, SQL Server, BizTalk, etc. They had several pages of checklists and instructions on how to set them up. Your luck ran out very quickly if your environment had two or more of these servers to be installed – with all the dependencies, service packs and what not.

But if you are one of those forward-looking people who burnt their fingers earlier, you would have probably automated several post-installation configuration procedures. The cloud brings this culture of automation to the masses. If you are directly using an IaaS service, pre-configured VMs take care of many of the OS and application stack installation issues. (If you are running your application on top of a PaaS (Platform as a Service) offering, you don’t even need to worry about VMs, as the PaaS platform takes care of them. But PaaS is a different topic – probably for another post.)  Now, you only worry about automating the provisioning of your application instance. And this is the crux of this post.

If you really want to take advantage of load balancing and auto-scaling in the cloud, human-driven automation of application provisioning is not good enough. Here are some reasons why:

  • You want to dynamically add more instances (with some constraints and bounds) as the load increases, but the last thing you want is to wake up your operations guy in the middle of the night to run the automated script on the new instances before they can go live.
  • You have deployed HA using an Active-Active or Active-Passive setup, and you need the instances to come back online automatically right after a failure and switch-over.
  • Sometimes instances go down due to bugs or memory leaks. You need new instances to be brought up to continue handling the traffic as if nothing happened.
  • You want to make sure that your system is ready to deal with any unknown failures – as part of this, your QA/Test infrastructure needs to bring down various application instances randomly and see if the system recovers.

To achieve the above, you need to design your application instances to obtain their provisioning/configuration information dynamically once they come up.  There are a couple of ways to do this, listed in priority order:

1. Get it from a known location – In this design approach, an application instance reaches out to a central configuration repository to pull in the necessary configuration.  This central configuration repository could be your own server serving the configuration, or it could be built on top of other highly available cloud services such as Amazon’s SimpleDB.

Discovering the central repository itself could happen via a limited broadcast message (within a sub-net) or by embedding the repository server identification information (as a DNS name, for example) in the application instance image itself.  Most high-traffic sites in the cloud are designed this way.

2. Baked cookies – This is the easiest approach to begin with.  Use one of the VM cloning methods provided by your cloud service provider to create a golden image of a freshly configured instance, and use this golden image to spin up new instances. The only downside of this approach is that with every new patch or new version of your application, you need to re-create the golden image.

Many well-known, high-traffic sites like Zynga and Netflix use similar techniques. Back in my Apigee days, working with Netflix as our customer, I remember them using a mix of both approaches for different instance types (the second approach for the soft appliance and the first for the application server instances).
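As a rough sketch of the first approach, here is what the boot-time configuration pull might look like. All the names here – the repository host, the config fields, and the `fake_repo` stub standing in for the real HTTP transport – are illustrative assumptions, not part of any actual product:

```python
import json
import time

# Assumed DNS name baked into the application instance image.
REPO_HOST = "config.internal.example.com"

def fetch_config(role, transport, retries=3, delay=0.1):
    """Pull JSON configuration for this instance's role from the central
    repository, retrying a few times to ride out transient failures."""
    url = "http://%s/config/%s" % (REPO_HOST, role)
    for attempt in range(retries):
        try:
            return json.loads(transport(url))
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(delay)  # back off briefly before the next attempt

# In production, `transport` would be something like
# urllib.request.urlopen(url).read(); a stub stands in for it here.
def fake_repo(url):
    role = url.rsplit("/", 1)[-1]
    return json.dumps({"role": role, "db_host": "db1.internal", "pool_size": 8})

config = fetch_config("appserver", fake_repo)
```

An instance would run something like this once at boot, before registering itself with the load balancer.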

Once you have figured out how to bring up your instances without requiring manual, human-driven provisioning scripts, solving the above use cases is a trivial matter of working with your cloud vendor’s load balancer (e.g., Amazon’s ELB) or cloud management system (such as RightScale) to configure auto-scaling and high availability.
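The fourth use case above – randomly bringing down instances to verify recovery – can be sketched in a few lines. This is a toy model under assumed names: `terminate` stands in for your cloud provider’s API call, and the auto-scaler is reduced to a fleet-size check:

```python
import random

def chaos_round(fleet, terminate):
    """Kill one randomly chosen instance, as a QA/test harness would."""
    victim = random.choice(sorted(fleet))
    terminate(victim)          # in real life: a cloud API call
    fleet.discard(victim)
    return victim

def autoscale(fleet, desired):
    """Toy auto-scaler: top the fleet back up to the desired size."""
    n = 0
    while len(fleet) < desired:
        n += 1
        fleet.add("replacement-%d" % n)
    return fleet

fleet = {"app-1", "app-2", "app-3"}
chaos_round(fleet, terminate=lambda instance: None)
autoscale(fleet, desired=3)
```

If the replacement instances come up and pull their own configuration (as in the first approach), the system passes the test without any human in the loop.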

Provide free access to your whitepapers

In this age, when one can get quality information from various sources in real time, why is it that some companies still hold on to the age-old technique of asking users to fill out a lengthy registration form to download or get access to company whitepapers?  Worse, some companies force you to do this for every single whitepaper – there is no sense of a session, login or account, even if one is prepared to go through that trouble.

Look at this example on the right. Why would anyone give out so much information just to access some whitepaper? All the fields are mandatory. Knowing that most whitepapers are biased towards the vendor’s solution, why would I be motivated to fill in this time sucker?

Also, look at the context in which people find these whitepapers. Rarely do you go to a company’s site just to read its whitepapers. I mean, eventually you may, if you like that product or space and want to know more. But in reality, you come across these whitepapers when you are searching, or reading some other article or blog post. People don’t like to change context and jump from the middle of an article to some whitepaper and start reading it. I typically right-click and open it in a separate tab (or download it) and get back to that queue when I am done with the article.  And when you get back to those new tabs and find that they are all asking you to fill in forms, what are the chances that you actually go through them? Nah, I just kill those tabs and move on.

See, you lost that opportunity of a few minutes of my time!!

What is surprising is that even some companies in the business of security, identity, federated authentication, etc. adopt the same ridiculous fill_in_the_form_first approach. Why is a simple email not good enough for you?  If registration is so important for accessing these articles, why not implement OpenID or a similar federated authentication technique?

If the goal is to get as much information about users as you can before they get access to the whitepaper, so that you can send a whole bunch of unsolicited spam, then you have already lost.  Nobody likes that. There are better ways of tracking demographics and the interest level of users in your whitepapers.

Comments on “How Cloud Computing Will Change IT: 6 New Predictions”

Bernard Golden, CEO of HyperStratus, wrote an excellent article on his 6 new predictions for IT. For once, I really liked a “prediction” article. It is backed by good experience and foresight.

He predicts that the cost of IT components (not just hardware; software as well) will decrease precipitously. Referencing Jevons paradox, he also talks about how this results in an increase in total IT spend (instead of a decrease).

As I was reading, it occurred to me that as confidence in SaaS applications grows, there will be a significant shift of more and more enterprise IT applications to SaaS. But in the near future, IT still needs to maintain legacy applications.  Further, with SaaS applications growing in every possible domain one can think of, the cost effectiveness of maintaining legacy applications will be a big question, resulting in tighter budgets.

While hybrid cloud (“application stretching”) solutions can help IT leverage cloud economics for legacy apps, they cannot be a permanent solution. In the absence of an equivalent SaaS application, the only long-term solution is a green-field approach, i.e., building the application using cloud technologies.   Unless IT upgrades its current app development skill sets, cloud application development is going to be a tough game to catch up with. PaaS platforms could significantly reduce the steep learning curve. Since not all enterprise applications have the same scalability requirements as Web 2.0 applications, I wouldn’t be surprised to see newer PaaS platforms hiding all the cloud stuff underneath and providing the familiar enterprise development environment/frameworks. In fact, Red Hat’s OpenShift is a good example of steps in that direction – they are probably the only known PaaS vendor with J2EE support. Microsoft’s Azure is another PaaS platform to watch for enterprises that have invested in .NET technology.

Another interesting aspect is that as more and more SaaS services are adopted by enterprises, there is going to be a definite need for IT to develop custom applications that integrate services from many SaaS applications. One approach is to look at solutions like IBM’s Cast Iron. Another would be an interesting opportunity in the PaaS area, i.e., a PaaS platform focused on integration in the cloud.

Dalai Lama on Man’s Life

The Dalai Lama was asked what surprised him the most; he said,

“Man, because he sacrifices his health in order to make money. Then he sacrifices money to recuperate his health. And then he is so anxious about the future that he does not enjoy the present; the result being that he does not live in the present or the future; he lives as if he is never going to die, and then he dies having never really lived.”


Source:  Internet – self-improvement meetup group.


Simple, but true for most of us!!


Weekly Random links of Interest – FightASystem, Tech News

This week’s interesting links from my RSS feeds:

First, this amazing story of Manu Sharma on how he sent a balloon to space and photographed Venus for less than $200.  Wow!!  Read it and draw some inspiration.

Individuals fighting a "System"

Other Tech News

  • Google demos search by image.  Watch this. This is going to result in some very interesting applications.
  • NotionInk Weathers The Storm, plans on releasing the Adam 2 Android tablet in December. Several months back, I wrote about this. I was eagerly waiting to buy one. Good that I didn’t. They had a good product but took very long to release it. They failed to capitalize on their software that fixed Android to run on tablets – well before Honeycomb was out. To date, they don’t have a Honeycomb build. While I didn’t like all the negative publicity that CrunchGear meted out to them, they are right about the total failure of NI’s marketing.
  • MIT, Harvard use nanotechnology to monitor a single living cell in real time. A nanosensor that attaches to the surface of a cell – much like a virus? What next?
  • A little bit of history (1 & 2) of how JavaScript came about. If you have grown up with the web, you would want to know this.

OpenFlow – Link digest

I spent the last few hours catching up on OpenFlow and what the blogosphere is thinking about it. Of late, there have been a lot of blog posts on OpenFlow – what it is and what it is not, etc. With a few companies starting to build products around it, and as they showcase some early OpenFlow-based technology in conferences and industry shows, the tech media has gone wild making OpenFlow out to be the panacea for all problems (including world peace and keeping the ice cream in your refrigerator cold during a power outage).

I came across some really good blog posts on this topic today. While some believe that this is a transformative and revolutionary concept, there are several at the other extreme dismissing it as a “failure before it started”.  Martín does a good job in his blog posts of clearing up some myths around OpenFlow.

So, here are some links:

Martín Casado (CTO, Nicira Networks)

Greg Ferro (Freelance network architect/engineer)

Ivan Pepelnjak‘s blog

VMWare community blog

Few other interesting posts

You definitely want to check out these sites as well:



Weekly Random links of Interest – ITaaS, Weiner’s downfall, LulzSec and BitCoins

Here are the random links of interest for this week. Have a good weekend.

IT as a Service – A stateless infrastructure architecture model.

Another great post by Lori MacVittie. Cloud is all about shifting to a “service” mindset. “Service” is also the foundational concept behind SOA. Every cloud service model (IaaS, PaaS, SaaS) abstracts and decouples certain aspects from the layers below, resulting in a flexibility never seen before. Take the case of IaaS. By abstracting and virtualizing compute, storage and network, we decouple applications from the underlying physical resources. This ultimately results in the unprecedented flexibility to move applications/workloads (VMware’s vMotion, Cisco’s OTV, etc.) across machines in a data center and across data centers – for load balancing, availability, etc. However, this shouldn’t affect how the services (applications) are consumed. SOA’s way of achieving this is via WSDLs and service registries. At the code level, programmers will immediately recall Martin Fowler’s Dependency Injection and Inversion of Control principles achieving similar objectives.
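As a minimal illustration of that dependency-injection idea (all the class and method names below are made up for the example), the consuming code depends only on an abstract capability, and the concrete backend is supplied from outside – much like IaaS decoupling applications from physical resources:

```python
class InMemoryStorage:
    """One concrete backend; an S3- or disk-backed class could replace it."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

class ReportService:
    def __init__(self, storage):
        # The dependency is injected, never constructed internally,
        # so this service is decoupled from any particular backend.
        self.storage = storage
    def save_report(self, name, body):
        self.storage.put(name, body)

# Swapping in a different storage backend needs no change to ReportService.
svc = ReportService(InMemoryStorage())
svc.save_report("q2", "revenue up")
```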

By looking at applications as “services” and adopting SOA principles, ITaaS can achieve benefits beyond IaaS.

Twitter & Anthony Weiner’s downfall

A must read. The article ends with the following quote 🙂

“The details of web product design had led to the pants being pulled down on a promising political career.”

To me, the interesting part of the story is not how a promising politician’s career was brought to an abrupt end, but the key product decisions the Twitter team made early in the process and how those decisions have changed the lives of everyone in the last few years.

While Twitter has more issues to solve (here are some I face in my daily use of Twitter), clearly the ability to follow someone without requiring their explicit permission has been the winner from day one.

via @timoreilly

We screw each other over for a jolt of satisfaction

That’s a cheesy title, but one should read it. I have been following the @LulzSec Twitter stream for the last week as they hack sites (Sony, CIA, etc.) and post plain-text user ids and passwords extracted from the sites’ internal databases for everyone to see.  In the beginning, I thought hacking sites that ignore basic security mechanisms (protection against SQL injection, stronger passwords, not storing users’ plain-text passwords in the DB, etc.) would send a message to companies and IT organizations across the board to re-examine their web applications.  And to some extent, @LulzSec may have achieved this purpose. In the recent past, we have not seen such consistent and systematic hacking of sites, and it got everyone’s attention about the continued ignorance of basic security practices in web applications.

But the subsequent act of @LulzSec posting the extracted usernames and passwords on public sites for everyone to see and download is a disastrous step. You cannot blame and punish naive internet users for using simple passwords, and the same ones across several sites, when the so-called “expert” application and system developers are not doing a good job of applying basic security practices to begin with. Look at what is happening now: these publicly available usernames and passwords are tempting many ordinary folks to try the same usernames and passwords on several other sites (Facebook, PayPal, Gmail, etc.) and see if they just work.
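For what it’s worth, the “basic practice” in question is simple enough to sketch: never store the plain-text password, only a per-user random salt plus a slow salted hash (PBKDF2 here, purely as an example of the technique, not what any particular site used):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, rounds=100000):
    """Return (salt, digest); store both instead of the plain-text password."""
    salt = salt or os.urandom(16)          # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def verify_password(password, salt, digest, rounds=100000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("s3cret")
```

With this in place, a stolen database yields salts and digests, not reusable credentials – which is exactly what would have blunted the damage here.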

Here is another user who went one step further, created a script to automate this process, and posted the script itself on GitHub for everyone to use and try. Too bad.

Given that @LulzSec is so active on Twitter, how long will it be before they get caught?

BitCoin, The New Money

If you haven’t heard of BitCoin, here is your chance to mint your own money – virtual money, for free 🙂 Read more about it here and here (Wikipedia link), and of course the Quora link has answers for all your questions.

I came across BitCoin on hackernews.com. Since then, so many people have been posting links to news about bitcoin that one impatient guy couldn’t bear it any longer and wrote a Safari extension to hide all bitcoin news on hackernews.com 🙂


Random Links – Week of 6th June 2011

Here are some random links that grabbed my attention during the last week:

  • 411 services. Build Twitter apps without ever coding against the Twitter API.  You get to reserve a keyword, and when someone replies with that keyword, you can choose to return either static content or dynamically generated content via a webhook (CGI script).  Nice.
  • Evaluating Text Extraction Algorithms. If you have ever done a project involving extraction of text from HTML documents, you will find this interesting. While you are at it, you may also want to look at the author’s earlier posts. Lots of information there.  Several years back, when I spent some time on this problem, trying to detect the layout of the page and remove the clutter, I found nothing in the state of the art. Glad to see so many solutions out there now.
  • RightScale launches a hybrid cloud solution. Looks like the trains for hybrid clouds have started arriving. With the argument about private clouds resolved – they are here to stay – hybrid cloud enablers are just what could make some of these enterprises look at the public cloud.
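The keyword dispatch behind the first link (411 services) is easy to picture: a reserved keyword maps either to static content or to a webhook call that generates content on the fly. The keywords and responses below are invented purely for illustration:

```python
# A keyword maps to static content (a string) or to a webhook-style
# callable that generates the reply dynamically from the message.
KEYWORDS = {
    "weather": lambda msg: "Sunny in %s" % msg.get("city", "your city"),
    "help": "Reply with a keyword to get info.",
}

def handle_reply(keyword, msg):
    handler = KEYWORDS.get(keyword)
    if handler is None:
        return "Unknown keyword"
    return handler(msg) if callable(handler) else handler

reply = handle_reply("weather", {"city": "Bangalore"})
```

The real service would invoke the webhook over HTTP, but the dispatch logic is essentially this.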