Saturday, February 21, 2009

Real Time Web

Want to understand the implications of the real-time web? Read: http://bit.ly/XjCKZ

Saturday, July 26, 2008

Cloud Computing and High Availability

Last week, the fail whale – a concept that has become associated with Twitter's recurring service outages – swam across the North Pacific and hit Amazon's S3 service. I am talking about the already widely discussed outage of Amazon's S3 service. It is fair to say that the services dependent on Amazon's S3 – e.g. Polyvore.com – really felt the "business and user impact" of the outage. Did the users of those dependent services care that those services were using Amazon's S3 to save costs? Of course they did not. The dependent services wrote apologetic blog entries, and the never-ending debates on the pros and cons of cloud computing started yet again. But I won't bore you with yet another synopsis of the outage.


UPDATE: Yesterday, Amazon did a great job of being transparent with the issue that caused the outage.


However, as a technology product leader who also runs a software-as-a-service product at IBM, I am always faced with new challenges related to the shared application code base and, more importantly, the shared application infrastructure. It is a no-brainer that specialized services (e.g. Amazon's S3) can always do a better job at lower cost than individual internal IT services could. But at the same time, most people fail to realize that the more clients the cloud-based services get, the more widely the impact will be felt during an outage. Therefore, with increased usage of the service, the tolerance for failure goes to zero, and uptime expectations go through the roof. Mathematically, we can represent it as: cloud computing uptime expectations = number of clients x cost of the service. Amazon's S3 had an outage. But is that an anomaly? No. If your answer is yes, you have never run a large-scale system. However, the impact of the Amazon S3 outage was unbearable to most of its clients. Again, please keep in mind that cloud-based storage means nothing to the users of FriendFeed, Twitter or Polyvore.com.


I am a big proponent of both infrastructure cloud computing services and software-as-a-service applications. However, this Amazon S3 outage got me thinking about how we as an industry could come up with a solution. We know that no matter how much redundancy a distributed cloud-based system has, someday, something breaks. So the obvious armchair architects' solution of redundant disks, servers, an unbreakable distributed system design and other infrastructure elements just won't prevent another outage.


I think one possible solution could be interoperability among the cloud-based infrastructure services. The concept is analogous to the SMTP and POP protocols for email services. Let's take the example of online storage. Amazon S3 and participating competitors would agree on a standard API to store and retrieve data in the cloud. Users would initially select a service based on their own criteria. S3 and its competitors could then offer an "extra insurance" feature of redundant cloud storage at the time of sign-up. With that feature, users could designate the cloud of a competitor of the selected company as a "redundant" cloud in case the selected company's cloud fails.
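To make the idea concrete, here is a minimal sketch of what a client built on such a standardized storage API might look like. The interface, class names and method names are all hypothetical assumptions for illustration; no such shared standard exists today.

```python
# Sketch of the standardized, interoperable storage API idea. The
# "put"/"get" interface and the provider classes are invented for
# illustration; they are not real provider APIs.

class CloudStore:
    """Hypothetical standard API that every participating provider implements."""
    def put(self, key, data): raise NotImplementedError
    def get(self, key): raise NotImplementedError

class InMemoryStore(CloudStore):
    """Stand-in for a real provider; can be marked 'down' to simulate an outage."""
    def __init__(self, name, down=False):
        self.name, self.down, self._data = name, down, {}
    def put(self, key, data):
        if self.down:
            raise IOError(self.name + " is unavailable")
        self._data[key] = data
    def get(self, key):
        if self.down:
            raise IOError(self.name + " is unavailable")
        return self._data[key]

class RedundantStore(CloudStore):
    """Writes to both clouds; reads from the primary, falling back to the
    'redundant' competitor cloud when the primary fails."""
    def __init__(self, primary, backup):
        self.primary, self.backup = primary, backup
    def put(self, key, data):
        for store in (self.primary, self.backup):
            try:
                store.put(key, data)
            except IOError:
                pass  # best effort: the other copy still holds the data
    def get(self, key):
        try:
            return self.primary.get(key)
        except IOError:
            return self.backup.get(key)  # automatic failover

s3 = InMemoryStore("S3")
rival = InMemoryStore("RivalCloud")
store = RedundantStore(primary=s3, backup=rival)
store.put("photo.jpg", b"...bytes...")
s3.down = True  # the fail whale strikes the primary
print(store.get("photo.jpg"))  # still served, from the redundant cloud
```

The key design point is that the application talks only to the standard interface, so failing over to a competitor requires no code changes at all.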


Now, this solution has not gone through any deep analysis and is more of a random thought. But I do wonder about the other factors that could play into it. The companies would have to compete hard to keep their customers, as customers would be one click away from switching to the competitor and perhaps making you the "redundant" cloud. Another factor is how someone would price the service of being redundant: X% of the primary service's price, and full charges during a failure of the primary provider? Also, what would be the economic advantage for the companies that interoperate with each other versus the ones that don't cooperate? Open source foundations – e.g. the Apache Software Foundation – have pioneered standardization among a lot of locally installed software. Will we need a similar foundation to manage cloud-based services interoperability?
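The pricing question above can be put into back-of-the-envelope form: a standby fraction of the primary rate, plus the full rate prorated for the hours the redundant cloud actually served traffic. All of the numbers here are made-up assumptions, purely to illustrate the shape of such a model.

```python
# Back-of-the-envelope sketch of the "X% on standby, full charges during
# failover" pricing idea floated above. Every number is an illustrative
# assumption, not a real provider's price.

def redundant_cloud_bill(primary_monthly_rate, standby_fraction,
                         outage_hours, hours_in_month=720):
    # Flat charge for holding a redundant copy while the primary is healthy.
    standby_charge = primary_monthly_rate * standby_fraction
    # Full rate, prorated to the hours the redundant cloud carried the load.
    failover_charge = primary_monthly_rate * (outage_hours / hours_in_month)
    return standby_charge + failover_charge

# $100/month primary, standby priced at 20%, one 8-hour outage:
bill = redundant_cloud_bill(100.0, 0.20, 8)
print(round(bill, 2))  # 21.11
```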

Saturday, July 12, 2008

Public Companies and Wall Street

Since January, I have followed Microsoft's Yahoo! acquisition proposal, then its withdrawal, then a semi-proposal [search only], and finally the ending of the discussions. And the last statement: that Microsoft would come back to the deal table only if Carl Icahn is able to replace the Yahoo! board. In between all of this, Yahoo! lost most of its senior executives and executed another re-organization; the executives of the two companies issued conflicting statements and blamed each other for tanking the discussion of a merger or partial acquisition.

In all of this, I have also concluded that Wall Street's never-ending desire to make as much money as possible [in the short term] provoked discussions and actions that otherwise would have been much more civil and less controversial, and could also have resulted in a friendly, good deal.

So, I do wonder. Yes, we all want to make money. But would anyone be OK making money by selling his soul? In the case of an Internet company, the soul of the company is its products and the users of those products. If your CEO states that creating shareholder value is the most important part of his job, isn't he putting the money before the soul – the products and users? So, is Wall Street capitalism such a vicious spiral that the more you spin around it, the more you care about just the money, and simply ignore the products and users – who could make or break your business?

None of us will get to know the real stories of the meetings that happened between Microsoft and Yahoo!, but personally, I am just disappointed in how Yahoo!, Microsoft and Mr. Icahn have handled it. Microsoft's approach made it hostile. Yahoo! has gone to the point of begging for the deal. And Mr. Icahn just wants to make money off the stock he has bought. In all of this, no one really cared about the product overlaps and the resulting confused users.

Perhaps the reality of Wall Street capitalism is to torpedo companies through its greedy pursuit of short-term gains. And the system recovers itself as new companies come along and users move on. On the Internet, we have seen that happen to AOL, Excite and other early Web 1.0 portals. However, I do consider Yahoo! a bit different, as it still has the right talent to make it happen. At the same time, the recent departures of executives and the stories of technical employees leaving for greener pastures could make it difficult if too many people do end up leaving. Microsoft, which still loses money in its Internet business unit, is not the right answer due to the two companies' vast cultural differences. And lastly, I still think the companies are too deep in the vicious Wall Street stock price cycle to come out of it and make the best possible decisions for their users and products.

When I started working 8 years ago, I always wanted to complete my project as early as possible to move on to the next one. That attitude resulted in some bad decisions that taught me the lifetime lesson that "there is no shortcut to success". Therefore, I strongly believe that the involved technology companies, competing in this hyper-competitive environment, can still bounce back and slowly become very strong players. All it would take is the right leadership, technical talent, and a maniacal focus on the long-term aspects – products and users – over the short-term forces of Wall Street.

Saturday, July 5, 2008

PC Migration in the Internet 2.0 Era

Last week, my company's 4-year "forceful" auto-refresh program dispatched a ThinkPad T61 to replace my 4-year-old ThinkPad T41. The company's policy is a 3-year refresh cycle, but I was too lazy to ask for a new one in the last 12 months as I really didn't want to go through the painful and time-consuming migration process. Additionally, my ThinkPad T41 had been very stable and durable, except when I spilt tea on it twice, resulting in motherboard and keyboard replacements.

I was apprehensive of the migration because I thought it would be as painful and time-consuming as my previous ones had been. Application re-installs (and who kept all those CDs?), CD/DVD burning of my data, and re-configuration of so many programs. I was so dead wrong.

Here is how it went. As soon as I got my new PC, I dug up my text file of "PC migration tasks" and started going through it.

My PC was already loaded with the corporate image containing all the security and office software, so I crossed those items out quickly. Also, 3 years ago, I had migrated to an external disk drive "continuous data protection" solution to back up all of the data from my user directory. The new ThinkPad pulled the multiple GB of data in just a few minutes from the external disk drive over the USB 2.0 port. So there was no need to burn data CDs. But the shocker for me was no longer needing web bookmarks. I had stopped using del.icio.us bookmarks as I had started to browse the entire web through Google Reader. And Firefox bookmarks were also not needed because my habits had changed. I just remembered every main website (yahoo.com, google.com, etc.) because I visited them every day. And the rest of my web browsing was either through web search or Google Reader search, in case I wanted to revisit an article I had either tagged or had in the back of my mind. So, this PC migration signaled the "death of web bookmarks" to me.

Thereafter, I went through the software installs on my old PC. I had stopped using most of them, because either they were programming tools (I transitioned to product management full time 4 years ago) or just desktop tools that were not needed in the era of Internet 2.0. I had moved to Quicken online in lieu of the installed version, and had already uploaded my pictures to Flickr in lieu of local software. I had stopped using MSN, Yahoo or Google Talk, as my company's communication took place exclusively over Sametime. And I rarely found any time for personal instant messaging. I preferred phone text messaging, voice calls (yes, my mom still wants to talk to me), twittering, Facebooking, and friendfeeding. I asked myself: do I really need those IM clients? Not really, but I still took a few minutes to install them. Lastly, the most time-consuming task was the migration of my Lotus Notes (yeah, we are mandated to use that) local connections and references to the team rooms over to the new PC.

Lastly, I had to install iTunes, as there was no cloud-based version of it. I was all the more disappointed because Apple's iTunes didn't allow me to download my songs from the iTunes cloud; I had to copy them manually from the old PC. I did wonder when we will see iTunes in the cloud, letting us just change PCs and de-authorize the old PC through the web. Maybe Apple needs some competitive pressure to work on it?

Now, if I were a programmer, my migration would have at least included a compiler / JDK installation and a code editor like Eclipse. I doubt that compilers will go into the cloud, but I do wonder if the local tools (e.g. Eclipse) could just store the configuration of the workbench in the cloud and retrieve it on the second PC. Maybe they already do that; I don't know, as I don't use the programming tools anymore.
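The "workbench settings in the cloud" idea above amounts to little more than serializing a tool's configuration, pushing it to some storage backend, and pulling it down on the second PC. A minimal sketch, assuming a local directory as a stand-in for the cloud and invented setting names:

```python
# Sketch of cloud-synced tool configuration. A local directory stands in
# for the cloud backend; the setting names are invented for illustration.
import json
import tempfile
from pathlib import Path

def push_settings(settings, cloud_dir, name="workbench.json"):
    """Serialize the workbench configuration and 'upload' it."""
    path = Path(cloud_dir)
    path.mkdir(parents=True, exist_ok=True)
    (path / name).write_text(json.dumps(settings, indent=2))

def pull_settings(cloud_dir, name="workbench.json"):
    """Fetch and deserialize the configuration on the other PC."""
    return json.loads((Path(cloud_dir) / name).read_text())

cloud = tempfile.mkdtemp()  # stand-in for a cloud storage location
# The "old PC" pushes its configuration...
push_settings({"theme": "dark", "tab_width": 4, "jdk": "1.6"}, cloud)
# ...and the "new PC" retrieves it, with no manual re-configuration.
print(pull_settings(cloud))
```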

All of the above took a little over an hour (Lotus Notes took most of it), and I was ready with the new laptop. I immediately fired up Firefox and was surfing the web.

So, going forward, I won't be apprehensive of changing PCs. If my company moves our mail to the web and Apple moves iTunes to the cloud, I will be 100% cloud-computing (my local external drive is part of that cloud) compatible.

Sunday, April 13, 2008

SCCC 2.0

Santa Clara Cricket Club 2.0 –
The link is the official communication of the technology transformation of my cricket club from the internally developed, hosted custom web applications era (Web 1.0) to the hosted SaaS web applications era (Web 2.0).

Friday, March 28, 2008

Leading Indicators-based Product Management

Carly Fiorina, the former CEO of HP, said in one of her speeches, I paraphrase, “The companies that survive in long term are managed and measured through leading indicators versus lagging indicators. A company’s quarterly results denote a lagging indicator because they represent the past decisions”.

I believe that Carly's aforementioned quote is a very important principle for how one should manage a team, a company or a product. At work, I lead a large team to develop, maintain and continuously improve a large software-as-a-service product. And every day, my team and I collectively make a lot of decisions on the product's direction and day-to-day operations. However, at the end of every day, I always think hard: did we make the right decisions that day? Did we make sure that our decisions will work in both the short and long term? Did we make sure people understood how those decisions will be carried out? Would our customers like the changes made through those decisions? Would our employees accept the change that came with our decisions? Would we achieve the product vision?

After understanding Carly's approach to leading indicators-based management, I have concluded that as long as our decisions incorporate the leading indicators, we will get most of them right. Personally, I always try to approach product decision discussions with the following leading indicators in mind.

Follow the Users

This leading indicator has been proven again and again, and the consumerization of technology is taking it to a new level. If the product features are not continuously developed and enhanced based on the users' feedback, the product will fail. The SaaS model and the blogosphere have made user-feedback-based development very fast and highly effective. Do we really need the old user group meetings and conferences? I don't think so. I believe the blogosphere can provide instant feedback, and the SaaS model has enabled instant feature deployment and beta testing.

Satisfy the Existing Users

This leading indicator has been proven more than once, even at very large scales. A classic example is AOL. At the start of the Internet, the AOL portal was the main hub for early Internet users. Today, go ask school kids about AOL – I can guarantee 95% of the responses will be "what is AOL?" In contrast, the word Google would get the opposite response. So why did AOL lose the brand when it had a head start of almost five years on Google? Simple answer: it didn't satisfy the users, and with a single click, the users switched to better websites.

Prepare for the Growth

In February 2008, Yahoo! launched Yahoo! Live – well ahead of the video industry leader, YouTube, which will release live video sometime later this year. However, the Yahoo! Live service went down on its go-live day and got a bad reputation from the start. What a colossal mistake! Yahoo! simply failed to understand contemporary users, who have no tolerance for a product failure before they really like the product. Yes, occasional outages are tolerated, but only after the users like the product, not before they can even get to use it. So, this indicator requires us to always plan the infrastructure for growth. If you cannot sustain a worldwide go-live, stage it by country. If the country's population is too large to handle, do a limited, invitation-based beta. As an example (though I cannot confirm this), the Gmail product entered the market through an invitation-only approach. I would speculate that the creation of a scalable product infrastructure could be one reason behind the invitation-only approach.
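The mechanics of such an invitation-based beta are simple to sketch: capacity grows in stages, and a user gets in only with a valid invite while capacity lasts. The class, its method names and the numbers below are all illustrative assumptions, not how Gmail actually did it.

```python
# Minimal sketch of invitation-gated, staged-rollout sign-up. Names and
# numbers are invented for illustration.
import secrets

class InviteGate:
    def __init__(self, capacity):
        self.capacity = capacity      # raise this as the infrastructure scales
        self.invites, self.users = set(), set()

    def issue_invite(self):
        code = secrets.token_hex(4)   # short random invite code
        self.invites.add(code)
        return code

    def sign_up(self, user, invite_code):
        if len(self.users) >= self.capacity:
            return False              # staged rollout: wait for more capacity
        if invite_code not in self.invites:
            return False              # no invite, no beta access
        self.invites.remove(invite_code)  # each invite is single-use
        self.users.add(user)
        return True

gate = InviteGate(capacity=2)
code = gate.issue_invite()
print(gate.sign_up("alice", code))        # True
print(gate.sign_up("bob", "bogus-code"))  # False
```

The point of the gate is exactly the one made above: user growth never outruns the infrastructure, because capacity is raised deliberately rather than discovered on launch day.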

Sunday, March 9, 2008

The Consumerization of Enterprise IT

Nicholas Carr, a disruptive technology author, writes in his latest book, The Big Switch, “A hundred years ago, companies stopped generating their own power with steam engines and dynamos and plugged into the newly built electric grid. The cheap power pumped out by electric utilities didn’t just change how businesses operate. It set off a chain reaction of economic and social transformations that brought the modern world into existence. Today, a similar revolution is under way. Hooked up to the Internet’s global computing grid, massive information-processing plants have begun pumping data and software code into our homes and businesses. This time, it’s computing that’s turning into a utility. The shift is already remaking the computer industry, bringing new competitors like Google and Salesforce.com to the fore and threatening stalwarts like Microsoft and Dell. But the effects will reach much further. Cheap, utility-supplied computing will ultimately change society as profoundly as cheap electricity did. We can already see the early effects — in the shift of control over media from institutions to individuals, in debates over the value of privacy, in the export of the jobs of knowledge workers, even in the growing concentration of wealth. As information utilities expand, the changes will only broaden, and their pace will only accelerate”.

I could not agree more. This change is very disruptive and will shake things up. However, nobody would dispute the difficulty of the challenges that lie ahead of us in making this change happen in the enterprises. Historically, enterprises have always resisted change due to a combination of antiquated, controlling senior leadership styles, job security fears within middle management, and simple resistance to adapting to new ways among the employees. So, the question is simple: how can we enable our enterprises to make this switch? Those of us who have worked on enterprise projects know very well that enterprise users love "custom" solutions. Their "requirements" result in customizations of off-the-shelf software products or, in some cases, development of custom software projects. All of those customizations and custom projects are expensive to develop, are mostly late and over budget, and have a steep maintenance cost – yes, that gives job security to the same developers. Consequently, contemporary enterprises have IT budgets in the millions of dollars and are turning to offshore outsourcing to cut costs.

So, is it simply impossible? Do those enterprises really need to restart? Is the roll-out of a new strategy the magic answer? Well, I think a restart is not an option for almost all of the companies. The roll-out of a new strategy is necessary but not sufficient. I believe the answer lies in what pundits are calling the "consumerization of enterprise IT". Basically, the same consumer Internet-based technologies that have made us Internet-savvy users will make their way into the enterprises to simplify and standardize the enterprise IT systems. However, from my own experience, I have witnessed two broad challenges that need to be overcome to move enterprises from the existing customized enterprise IT solutions to standardized online software solutions.

The first is the features (or lack thereof) and limited configurability (by design) of the online software. In terms of features, I would say it is a function of time until the online enterprise software catches up with the local enterprise software. I would give it three to five more years based on my own research. However, as happens with any new technology, some customers will embrace the online software now, as it is good enough, though not perfect yet. As a result, this will give them a jump start on their competitors later down the road. In contrast, the limited configurability is, by design, the underlying principle of the online software. Why? We all know from the product world that if a company cannot replicate a product through a standardized model, it won't make profits and its customers will not get a low-priced product. Let me illustrate this with an example. Imagine a world without a standardized way to drive a car (gears, dashboard, accelerator, brake positions, etc.). If the car industry had not settled on those standard elements, cars would still be expensive and less widely used, as they would have required too much training to drive and would have been almost impossible to switch between. In a similar fashion, through the standardization of online software for the common processes – HR, accounting, procurement, and contact center – the customers will reap the benefits over time and avoid costly upgrades resulting in business disruptions. Of course, the core business processes are an exception to this rule. What is an example of a core process? How an airline determines its ticket price for a particular flight, or how a car manufacturer sets up its robot-based assembly line to manufacture more efficiently.

The second challenge to overcome is not about technology or business processes. It is about people. We all know that making a change in enterprises is like making elephants dance. So, this is where I believe the proliferation of Web 2.0-based consumer technologies is going to help us. Let me give you an example. Circa 2002, the Santa Clara Cricket Club, where I serve as the CTO, had a very manual, email-based players' availability management process. At that time, I took the initiative and developed a custom Java-based web application to manage players' availability. It was a runaway success. However, over time, the home-grown application started to outgrow itself, as I didn't have much time to update the code and infrastructure with the latest changes and security fixes. Fast forward to 2007: the application was considered antiquated. It didn't integrate with any portal. It had a steep development cycle for minor change requests. In summary, the application was hindering the Santa Clara Cricket Club's growth. Fortunately, around this time, the outside world had changed also. We had Google Apps available to us. One weekend, my colleague and I spent a few hours configuring Google Apps' Calendar for our club members. The next weekend, we moved all of our user accounts to Google Apps, and with the flip of a switch, we abandoned our old availability application. This was a risky bet. What would happen if the users could not use the standardized calendar-based availability? As it happened, almost everything went smoothly. We had a few user ID and password problems, but nobody reported any problem using the application itself. More than a hundred users were able to adapt themselves to Google Calendar without any training. How did that happen? Isn't that the dream for our enterprise applications? Well, it all happened because Google Apps' calendar was similar to the online calendars that our users were already using to manage their personal calendars.

So, our users' experience with consumer technologies resulted in a smooth cut-over to the new enterprise app without any hurdles. What was my conclusion from this experiment? The consumerization of online enterprise software will be the most profound way to overcome the challenges of user adoption of online software over in-house customized software.