Is Software Eating Our World in India?

Just finished reading Marc Andreessen’s article “Why Software Is Eating the World”. An excellent read and worth your time, especially if you haven’t been “seeing” what’s happening around us of late. As Simon Wardley comments, the article mostly talks about the “what” and not the “why”.

After reading this, I couldn’t help wondering whether software is really eating everything in our beloved country.

In the context of India, let me take a look at the industry examples that Marc covers in his article and see if software is ruling the world here.

Books – While it is a known fact today that Amazon will launch its store in India sometime early next year, there is only one other name one can think of today – and beyond the top few cities, I don’t think anyone knows that brand or uses it to buy books. Back in 1999, we built India’s first online music store for a startup called Fabmall. We launched the original site in 9 weeks with complete payment integration from Citibank (this was Citibank’s first payment gateway implementation too, and I still remember debugging the integration issues late into the night, sitting in the Citibank office in Bangalore). They went on to add books, electronics, grocery and much more. I remember ordering groceries in the middle of the night to be shipped the next day, and I thought life was good. Alas, they were too early for the Indian market and couldn’t sustain for long.

What do I do for books today? I rarely buy from brick-and-mortar stores. Mostly, I order Kindle books and read them on a laptop/tablet. They cost less, are delivered faster than any other approach, and pretty much any book is available. On every count, they beat the physical stores. Great, but “I” and people like me represent a minuscule percentage of India’s total population that buys books from online stores. The rest of India continues to buy books from physical stores, and probably will for the foreseeable future.

Music – Same story as books. While the magnetic tape is dead in tier-1 and tier-2 cities, it still thrives among the rest of the population. And while the younger generation has switched over to digital music, thanks to the countless clones of iPods, I haven’t seen anyone buying music online. Where would they buy it if they wanted to? An Internet connection at home is still a luxury in tier-3 cities and below.

In fact, when I recently asked my sister’s kid where she gets her music collection from, she looked at me with surprise, as if the answer was obvious, and replied, “Friends”. And where do these “friends” get their music from? Other “friends”. This is where Indian jugaad comes to the rescue 🙂 You will find these enterprising guys in every corner of the main streets, with small one-room shops. Some just migrated from the magnetic-tape-copying and CD-burning business to digital versions. These guys, who usually have a techie friend (or a friend who knows a techie friend…) who can pirate the latest music from torrents, will help you copy the latest music in bulk onto your phone/digital player for just a few rupees.

Video & Entertainment – Clearly, no one knows about Netflix in India (except people like me who have worked with them directly, people in MNCs visiting their US counterparts, or people who closely follow technology). Yes, some of us have YouTube access, and some regularly download torrents for the latest movies. But to a large extent, it is still the movie halls and TV that dominate the scene. Even online ticketing is limited to tier-1 cities. On the TV front, only now are we seeing some DTH providers transmit half a dozen or so HD channels.

LinkedIn – I have recruited engineers in the past, and I know several friends who regularly recruit people into IT companies. It seems that no one uses LinkedIn to recruit, although it sounds obvious that they should. We still rely heavily on middlemen (recruiting agencies) and largely seem comfortable with that. There are some other reasons too, but I hope this will change very soon (unlike the above).


Some interesting numbers from a different brick-and-mortar world: RBI estimates that only about 20% of the Indian population has access to banking. Even if you exclude people under 20 years of age, this is very sparse. With a total of about 84K branches (across all commercial and state-run banks), only 5% of villages have access to banking.

I can go on with more examples, but they all tell the same story. Our adoption of the latest technology services doesn’t necessarily follow the steps the developed world took, and for good reason. We will probably skip over several of the steps and catch up with the latest. The cell phone is a good example. This is one technology you will see in every corner of the country. You may find it difficult to find a post office or an internet connection, but you can pretty much expect someone to have a cell phone.

But the analogy stops there. Our use of the cell phone (again, beyond the very small percentage of people who own connected smartphones) is pretty much limited to SMS and voice calls – what it was originally meant for.

I think the root of the problem is still the lack of widespread Internet connectivity. While we have figured out some workarounds (the digital music piracy example), we are not going to see a cell-phone-like revolution in other industries without widespread Internet access. I wonder what will drive this?

Some of us will continue to write software to eat the “worlds” out there, but not here and not now.


Self-service Configuration for Auto-scaling Cloud Applications

Before the cloud, deploying an application into production meant downloading the prerequisite software and the application code, installing both, configuring them from the CLI/UI, switching configuration files to the production setup (and not the test setup), tweaking them to access production databases/servers, etc. etc. If your operations guy was organized, he would have a lengthy checklist of things to do. And god bless you if your application required Microsoft SharePoint, SQL Server, BizTalk, etc. These had several pages of checklists and instructions on how to set them up. Your luck ran out very quickly if your environment had two or more of these servers to be installed – with all the dependencies, service packs and what not.

But if you are one of those forward-looking people who burnt their fingers earlier, you would probably have automated several post-installation configuration procedures. The cloud brings this culture of automation to the masses. If you are directly using an IaaS service, pre-configured VMs take care of many of the OS and application stack installation issues. (If you are running your application on top of a PaaS (Platform as a Service) offering, you don’t even need to worry about VMs, as the PaaS platform takes care of them. But PaaS is a different topic – probably for another post.) Now, you only need to worry about automating the provisioning of your application instance. And that is the crux of this post.

If you really want to take advantage of load balancing and auto-scaling in the cloud, human-driven automation of application provisioning is not good enough. Here are some reasons why:

  • You want to dynamically add more instances (with some constraints and bounds) as the load increases, but the last thing you want is to wake up your operations guy in the middle of the night to run the automated script on the new instances before they can go live.
  • You have deployed HA using an Active-Active or Active-Passive setup, and you need the instances to come back online automatically right after a failure and switch-over.
  • Sometimes instances go down due to bugs or memory leaks. You need new instances to be brought up to continue handling the traffic as if nothing happened.
  • You want to make sure your system is ready to deal with any unknown failures – as part of this, your QA/test infrastructure needs to bring down various application instances randomly and verify that the system recovers.
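That last use case – randomly killing instances to prove the system recovers – can be sketched in a few lines. The `pick_victim` helper below is a hypothetical illustration, and the commented-out `cloud.*` calls stand in for whatever API your cloud vendor or management system actually exposes:

```python
import random

def pick_victim(instances, rng=None):
    """Randomly choose one running instance to kill; None if nothing is running."""
    running = [i for i in instances if i.get("state") == "running"]
    if not running:
        return None
    return (rng or random).choice(running)

# A hypothetical test harness would then do something like:
#   victim = pick_victim(cloud.list_instances("web-tier"))
#   cloud.terminate(victim["id"])
#   wait_and_assert(lambda: healthy_count("web-tier") == desired_count)
```

The point is that the kill step is deliberately dumb and random; all the intelligence has to live in the system under test, which must detect the loss and re-provision without human help.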

To achieve the above, you need to design your application instances to obtain their provisioning/configuration information dynamically once they come up. There are a couple of ways to do this, in order of preference:

1. Get it from a known location – In this design approach, an application instance reaches out to a central configuration repository to pull in the necessary configuration. This central configuration repository could be your own server serving the configuration, or it could be built on top of a highly available cloud service such as Amazon’s SimpleDB.

Discovering the central repository itself could happen via a limited broadcast message (within a subnet) or by embedding the repository server’s identity (as a DNS name, for example) in the application instance image itself. Most high-traffic sites in the cloud are designed this way.
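As a rough sketch of this first approach – all the names here (the repository DNS name, the `CONFIG_REPO` override, the config path) are hypothetical – an instance can discover the repository from a value baked into its image and overlay the configuration it pulls on top of local defaults:

```python
import json
import os

# Hypothetical bootstrap: the repository's DNS name is baked into the
# instance image; an environment variable can override it for testing.
DEFAULT_REPO = "config.internal.example.com"

def repo_address():
    """Discover the central configuration repository (env override, else baked-in)."""
    return os.environ.get("CONFIG_REPO", DEFAULT_REPO)

def merge_config(raw_json, defaults=None):
    """Overlay the JSON payload pulled from the repository on local defaults."""
    config = dict(defaults or {})
    config.update(json.loads(raw_json))
    return config

# On boot, an instance would do roughly:
#   raw = urllib.request.urlopen(
#       "http://%s/config/appserver" % repo_address()).read()
#   config = merge_config(raw, defaults={"db_pool_size": 10})
```

The merge-over-defaults step matters: the image ships with safe defaults, so a partially reachable repository degrades gracefully instead of leaving the instance unconfigured.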

2. Baked cookies – This is the easiest approach to begin with. Use one of the VM cloning methods provided by your cloud service provider to create a golden image of a freshly configured instance, and use this golden image to spin up new instances. The only downside of this approach is that with every new patch or new version of your application, you need to re-create the golden image.
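A minimal sketch of the bookkeeping this approach forces on you: since every release needs a fresh bake, it helps to stamp golden images with the application version and bake time. The naming scheme and the commented `cloud.create_image` call below are assumptions, not any particular vendor’s API:

```python
from datetime import datetime, timezone

def golden_image_name(app, version, when=None):
    """Name a golden image so you can tell which release it was baked from."""
    when = when or datetime.now(timezone.utc)
    return "%s-v%s-%s" % (app, version, when.strftime("%Y%m%d%H%M"))

# With a hypothetical cloud SDK, re-baking after each release looks like:
#   image_id = cloud.create_image(instance_id,
#                                 name=golden_image_name("shop", "2.1"))
#   scaling_group.update(image=image_id)  # new instances spin up pre-configured
```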

Many well-known, high-traffic sites like Zynga and Netflix use similar techniques. Back in my Apigee days, working with Netflix as a customer, I remember them using a mix of both approaches for different instance types (the second approach for the soft appliance and the first for the application server instances).

Once you have figured out how to bring up your instances without manually triggered provisioning scripts, solving the above use cases becomes a trivial matter of working with your cloud vendor’s load balancer (e.g., Amazon’s ELB) or cloud management system (such as RightScale) to configure auto-scaling and high availability.

Provide free access to your whitepapers

In this age, when one can get quality information from various sources in real time, why is it that some companies still hold on to the age-old technique of asking users to fill out a lengthy registration form to download or access company whitepapers? Worse, some companies force you to do this for every single whitepaper – there is no sense of a session, login or account, even if one is prepared to go through that trouble.

Look at this example on the right. Why would anyone give out so much information just to access a whitepaper? All the fields are mandatory. Knowing that most whitepapers are biased towards the vendor’s solution, why would I be motivated to fill in this time sucker?

Also, look at the context in which people find these whitepapers. Rarely do you go to a company’s site just to read its whitepapers. Eventually you may, if you like that product or space and want to know more. But in reality, you come across these whitepapers while searching or reading some other article or blog post. People don’t like to switch context in the middle of an article and jump to some whitepaper. I typically right-click, open it in a separate tab (or download it), and get back to that queue when I am done with the article. And when you return to these new tabs and find that they are all asking you to fill in forms, what are the chances that one actually goes through them? Nah, I just kill those tabs and move on.

See, you lost that opportunity for a few minutes of my time!!

What is surprising is that even some companies in the business of security, identity, federated authentication, etc. adopt the same ridiculous fill_in_the_form_first approach. Why is it that a simple email is not good enough for you? If registration is so important for access to these articles, why not implement OpenID or a similar federated authentication technique?

If the goal is to get as much information about users as you can before they get access to the whitepaper, so that you can send a whole bunch of unsolicited spam, then you have already lost. Nobody likes that. There are better ways of tracking demographics and gauging users’ interest in your whitepapers.

Comments on “How Cloud Computing Will Change IT: 6 New Predictions”

Bernard Golden, CEO of HyperStratus, wrote an excellent article on his 6 new predictions for IT. For once, I really liked a “prediction” article. It is backed by solid experience and foresight.

He predicts that the cost of IT components (not just hardware; software as well) will decrease precipitously. Referencing Jevons paradox, he also talks about how this results in an increase in total IT spend (instead of a decrease).

As I was reading, it occurred to me that as confidence in SaaS applications grows, there will be a significant shift of enterprise IT applications to SaaS. But in the near term, IT still needs to maintain legacy applications. Further, with SaaS applications appearing in every possible domain one can think of, the cost-effectiveness of maintaining legacy applications will be a big question, resulting in tighter budgets.

While hybrid cloud (“application stretching”) solutions can help IT leverage cloud economics for legacy apps, they cannot be a permanent solution. In the absence of an equivalent SaaS application, the only long-term solution is a green-field approach, i.e., rebuilding the application using cloud technologies. Unless IT upgrades its current application development skill sets, cloud application development is going to be a tough game to catch up with. PaaS platforms could significantly reduce the steep learning curve. Since not all enterprise applications have the same scalability requirements as Web 2.0 applications, I wouldn’t be surprised to see newer PaaS platforms hide all the cloud machinery underneath and provide familiar enterprise development environments/frameworks. In fact, Red Hat’s OpenShift is a good example of a step in that direction – they are probably the only well-known PaaS vendor with J2EE support. Microsoft’s Azure is another PaaS platform to watch for enterprises invested in .NET technology.

Another interesting aspect: as more and more SaaS services are adopted by enterprises, there is going to be a definite need for IT to develop custom applications that integrate services from many SaaS applications. One approach is to look at solutions like IBM’s Cast Iron. Another is an interesting opportunity in the PaaS area, i.e., a PaaS platform focused on integration in the cloud.

Dalai Lama on Man’s Life

The Dalai Lama was asked what surprised him the most; he said,

“Man, because he sacrifices his health in order to make money. Then he sacrifices money to recuperate his health. And then he is so anxious about the future that he does not enjoy the present; the result being that he does not live in the present or the future; he lives as if he is never going to die, and then he dies having never really lived.”


Source:  Internet – self-improvement meetup group.


Simple, but true for most of us!!