Thursday, December 22, 2005

Always-On: Re-engineering the Read/Write Web and the Enterprise (with examples)

Last week del.icio.us went down for several days, creating a domino effect of failures all across the web, as many sites (mine included) were mashing up its APIs for information retrieval. This was an acute reminder of the inherent, hidden fragility of SOA implementations – Enterprise or WWW alike. But this time, I am going to offer a solution.

This will be a longer post than usual, but I hope it will be rewarding, as I am going to present you with a most unusual requirement for an SOA implementation, which, if met properly, can change the way you think about SOA and bring salvation not only to Enterprises, but also to the not-yet-matured Read/Write Web.

In early 2001, I designed the first version of Orange's bespoke SOA framework. I was obsessed with availability & reliability, as the SOA hub was about to become the central execution and routing engine of a Telco company, meaning downtime was not an option. I remember a meeting I had with Orange's EVP of Technologies, who made it clear to me that if this SOA stuff was not going to be as reliable as an Ericsson switch, then he wouldn't approve it. And to remove any shadow of a doubt, he explained in great detail what he meant by an "Ericsson Switch": when a voice call is created in the telecom switches, there are always two switches involved – a master and a slave. If the master fails, the conversation continues uninterrupted on the slave, which keeps constant track of the conversation on the master. "That's the availability and reliability I would like to have from your SOA hub", he concluded.
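The "Ericsson Switch" idea can be put in a few lines of code. This is a minimal sketch of the pattern, not Orange's actual design: every state change on the master is synchronously mirrored to a standby, so the standby can carry the conversation on if the master dies. All class and method names here are illustrative.

```python
class Switch:
    def __init__(self, name):
        self.name = name
        self.calls = {}          # call_id -> current conversation state

    def update_call(self, call_id, state):
        self.calls[call_id] = state


class MasterSwitch(Switch):
    def __init__(self, name, slave):
        super().__init__(name)
        self.slave = slave       # standby shadowing every update

    def update_call(self, call_id, state):
        super().update_call(call_id, state)
        # Synchronous replication: the slave keeps constant track
        # of the conversation on the master.
        self.slave.update_call(call_id, state)


def active_switch(master, master_alive):
    """Fail over: the slave already holds the full conversation state."""
    return master if master_alive else master.slave


slave = Switch("slave")
master = MasterSwitch("master", slave)
master.update_call("call-1", "ringing")
master.update_call("call-1", "connected")

# The master crashes mid-call; the slave continues uninterrupted.
survivor = active_switch(master, master_alive=False)
```

The conversation survives because the state lives in two places before the failure, not because anyone recovers it afterwards.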

I was fascinated by this engineering-oriented manager, to whom IT was an eerie, money-sucking dark force, and yet who got the essence of SOA in a matter of seconds, drawing the correct analogy to his familiar landscape of telecom switches and IP routers.

So we engineered our bespoke SOA hub to yield such fabulous availability and reliability, making it suitable for Telco-grade operations. But this is not the unusual requirement I mentioned at the beginning, so you'll need to hang on with me for a bit longer.

When we launched, our bullet-proof SOA hub was indeed as reliable and as available as designed. All across the Enterprise, departments started to use Services and to build mash-up applications, seeing Time-to-market shrink and Productivity grow.

But one day everything crashed. The engineering guys and all the rest of my admirers had their day.

Everything crashed, but it was not the SOA hub that failed. One of the systems providing a most popular Service was not responding for whatever reason. Without getting into too many details – this failure created a chain of other failures and we ended up with a crashed IT.

Yet, everybody was looking accusingly at us, the SOA framework providers.

"Guys, wake up", I said, "one of the systems was down – it was not the SOA hub! You don't really expect me to guarantee that the service provider is up and running – that's impossible! I'm just the plumber, the BUS. If you take a bus to meet a friend, the bus driver cannot guarantee that your friend will actually be waiting for you when you get off at the station".

"Man", they said, "you introduced this Services façade claiming that we no longer need to mess with any system besides our own. Well, we bought into your story, and now you're coming and telling us you cannot commit? That's not going to happen: either you commit that whenever we access a Service – it's there – or get out of our way and let us build our programs the way we used to before you came along with your SOA stuff".

Although this dialogue was never vocally pronounced, it became clear to me that by providing an Enterprise SOA I was expected to assume an Enterprise responsibility. Naturally, that is beyond the scope of any SOA framework. No SOA supplier – us included :) – can guarantee availability of the Service providers. But that was the unusual requirement I was facing: provide an SOA framework in which the Services are always-on. How do we do that?

Some comments I got from my colleagues at the time, as well as from my current customers and from ISVs (IBM…) to whom I presented this challenge:

"Well, Providers shouldn't be down! make them highly-available!"
"Use clusters!", "Use Oracle RAC!"
"You have lousy systems architecture if your mission-critical applications fail!"
"The Mainframe never fails; we put all our stuff on the mainframe."

And on and on it goes.

All the advice was provider-oriented and optimistic. Provider-oriented – because it claimed something had to be done to the provider in order to guarantee an always-on Service; Optimistic – because it assumed that once the provider was fortified it would always be on.
My experience has taught me that pessimistic or paranoiac designs are better. As a rule, I believe systems should be allowed to rest! There are upgrades (of OS, DB, App Server); there are bugs, human errors, lack of procedures and disasters of all kinds. We could invest some 5-10 million dollars to make each service provider bullet-proof, and it would still be bullet-proof only in theory (human errors and disasters can always happen). Differently put, pessimistic planning is not a bad idea.

In the search for the always-on Service solution, our focus changed. If we were to accept an axiom in which applications were allowed to R.I.P., then applications could no longer play an important role in the architecture. Applications became OPTIONAL to the Service execution. The only way to cope with such a requirement was to shift our focus from Applications to Information. We concluded that we had to protect the Information, not the Applications.

We then looked differently at the Services we had: no longer as facades for processes invoking application functionality (APIs), but rather as Information Retrievers or Information Modifiers. We realized that Services which are Information Retrievers are the most popular and the least tolerant to failures, which means they were part of a synchronous transaction. In contrast, Services which were Information Modifiers were less popular and highly tolerant to failures, meaning those who consumed them could get along pretty well with a later execution [as a result of a provider downtime]. This analysis formed the basis for our solution: the Information required by Information Retrievers had to be protected in an always-on manner.
A year later, our Enterprise SOA framework had an additional construct – let's call it "Google". Information Retrievers were no longer directed to the applications that created the information but rather to our Google, which was kept up to date in a [near] real-time fashion and was also designed like an Ericsson Switch. :)
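The construct can be sketched in a few lines. This is a hedged illustration of the idea, not the actual hub: an always-on read store is kept up to date in [near] real time by the application's writes, so Information Retrievers keep working while the application rests. The names (`ReadStore`, `CustomerApp`) are invented for the example.

```python
class ReadStore:
    """The always-on copy of the information (our 'Google')."""
    def __init__(self):
        self.data = {}

    def get(self, key):
        return self.data[key]


class CustomerApp:
    """The system of record; allowed to 'rest' (go down)."""
    def __init__(self, read_store):
        self.up = True
        self.records = {}
        self.read_store = read_store

    def write(self, key, value):
        if not self.up:
            raise RuntimeError("provider down - defer for later execution")
        self.records[key] = value
        # [Near] real-time propagation to the read store.
        self.read_store.data[key] = value


store = ReadStore()
app = CustomerApp(store)
app.write("cust-42", {"name": "Alice"})

app.up = False   # the application rests...
value = store.get("cust-42")   # ...but retrieval still works
```

The point of the design: Information Retrievers never touch the application at all, so its downtime is invisible to them.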

I am keeping some more professional secrets to myself… I hope, though, that the general idea is clear enough.

I think that 2005 proved this approach to be right. The focus of the entire industry has been shifting from Applications to Information. We all agree today that there is no need to visit a Web-Site (an application) in order to get the Information we want/need. Through Information syndication, all the data I need comes to me. But the del.icio.us failure is a warning sign for this remix generation (a term coined by Vinod Khosla). The failure proved (at least) two things:
1. That the value of Information is subjective – what one considers noise, another considers gold. The del.icio.us downtime was undoubtedly painful for some individuals, while others couldn't care less.
2. That we have to take the most pessimistic approach regarding Information protection, and that we have to assume responsibility in a global manner. We cannot leave this responsibility to the web-site owners. Web sites, like applications, should be allowed to rest, but the Information they hold must be always-on.

It's time, then, to Google. "Google is the only globally scalable distributed system that can handle all information in all languages all the time". This sentence is taken from an absolutely fascinating presentation by Mr. Steele, titled “Steele on Intelligence - What can we know, how? Reflections on the near future. Google versus the CIA—Five Year Outlook”.
I am joining this observation and suggesting that Web 2.0 applications should be built around the concepts of Information Modifiers and Information Retrievers. The following re-engineering of del.icio.us exemplifies it: del.icio.us will provide an Information Modifier API, in charge of creating/modifying/deleting the information inside the del.icio.us repositories. Once the information is modified, it will be captured, analyzed and stored by Google (either push via a Google API or pull via a Google appliance). del.icio.us will also provide an Information Retriever API that would be hosted by Google and retrieve the information from Google. The Information Retriever API will give the Googled information the required look and feel and other needed aspects of Information presentation.
If del.icio.us is down, then no Information can be modified. This in itself might be unpleasant, but it is certainly not as catastrophic as not being able to retrieve the Information that is already there – and for that we've got Google.

For Google's never down.


By Blogger Philip Hartman, at 7:40 PM  

Why was I reading techie stuff on Christmas Eve? I don't know (and shame on me), but I really enjoyed your post. It was very timely as well, as more and more SOA-related discussions are on the horizon for me.

By Anonymous Gabriel, at 11:53 PM  

Great post, really! Thanks a lot for your essay

Post a Comment

Monday, December 19, 2005

The day del.icio.us went down

This phenomenon deserves its own post: a selection of [tagged] comments from the del.icio.us blog.

tag: addicts

- uarghhhhhhh uarghhhhhh can'

- badtrip. i'm a delicious addicted.

- oh God, where the, i need it right now.

- i miss you so much! hope you come back soon

- Sorry to hear about the problems. This is good for me personally though. I'm an addict and this is a good time for a break.

- This is painful. Please hurry.

tag: ?!

- it's getting late, I have six sites that I need to bookmark and I don't want to leave the computer on the entire night. What to do, what to do? No worries my friends, I'm staying up all night 'til the service is up and running. Happy Holidays to all the good folks at del.icio.us, and the Yahoo! folks as well. :)

tag: !yahoo

- The horror has begun. This is what you get when you sell out. Yahoo blows, who uses Yahoo anymore anyways? Why didn't they disappear with the rest of the dot-busts?

tag: authorities

- Attention: It is very important that everyone REMAIN CALM. Do not panic. Keep a bag of peanuts and a towel handy at all times during this crisis. We can get through this if we all take a deep breath, relax and remember that your browser has a Favorites option built-in. You can make a folder called delicious if you want and bookmark your websites there. I know it's not the same, but it will get you through this terrible time. You can do this. I believe in you.

tag: outsiders

- You do realise you can still bookmark locally and then shut down your computer right?
Oh, and you could always search for what you're after, you know like we did last week before del.icio.us existed. I've never seen so much whining over a non-essential and free service. It's absurd.


Post a Comment

Friday, December 16, 2005

Organizing the Information of the World

In this post I'll explain through a comparative analysis of Gspace and G2G Share, two Google-based web 2.0 mash-ups, why Google is the Real-World SOA*, and why we should be alert and watch our steps.

We have an intuitive-protective understanding of the sentence "Organizing the Information of the World". This "Information" is something that sits in web sites, Congress libraries and dusty government archives; it has nothing to do with us. Google's mantra is therefore noble and promethean: bringing the light of information (knowledge) to the human race.

But if we consider Information in its pure sense we must face reality as it is. "We" are Information. What we are - is Information, as well as what we are not. Our life-style, our census data, our files in our desktops, our click-streams (recently re-branded by Steve Gillmor as Attention or Gesture), and one day – our thoughts. These are all Information.

Google's well-known aspiration is to "Organize the World's Information" (Peter Norvig: Inside Google). Whoever organizes the Information of the world is in a potential position to control the World, and that's already well understood (GoogleWatch).

Currently, the Googles (Google, Yahoo, et al.) seem to "just" want to monetize their knowledge, by means of selling us user-friendly, ubiquitous services (advertisement or subscription – it's the same). The future will tell what else they "just" want.

Now that Web 2.0 is around, an interesting play is emerging. On the one hand, it allows the Googles to offer us mashed-up services. But on the other hand, it allows us, no less, to mash-up their services. When that happens, the Googles lose control. As some Gillmor Gangers described, the Googles want us to be part of their process. We need to have a world in which we are in charge of the process.

You see, Google is offering us an ever growing number of services. Hardly a week goes by without Google introducing some new services, features and hacks. They aren't doing this to make us happier; we are the resources at the end-points, mediated through their services. Their Service-Oriented Management & Control System knows who's behind the service (you, me) and controls us through Google processes. Would they allow us to have our own processes on our own(!) Information, once 'organized' by Google?

By doing a comparative analysis of two Web 2.0 Services - Gspace and G2G Share, we would be able to answer this question at once.

Firefox 1.5 has an extension named Gmail Space, or Gspace, that allows turning a Gmail account into a virtual 2GB drive (not a new idea, but the first to be ultra-convenient). One could have thought that Google would prevent such an abuse of its storage space, but it actually doesn't. And why? Because it serves their Control-through-Information strategy well. You place your files inside Google's vats, and with that, you make yourself more exposed and analyzable. And what if you place files which violate digital rights? Silence.
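The Gspace trick, in miniature: a file becomes a mail message with the file as an attachment, and the mailbox becomes the "drive". This sketch stores messages in a dict instead of talking to Gmail – real use would go over IMAP/SMTP – and the `GSPACE:` subject convention is invented for the example.

```python
from email import message_from_bytes
from email.message import EmailMessage

mailbox = {}   # stand-in for the Gmail account


def upload(filename, payload):
    """Wrap a file's bytes in a mail message and 'send' it to the mailbox."""
    msg = EmailMessage()
    msg["Subject"] = "GSPACE:" + filename
    msg.add_attachment(payload, maintype="application",
                       subtype="octet-stream", filename=filename)
    mailbox[filename] = msg.as_bytes()


def download(filename):
    """Fetch the message back and extract the attachment's bytes."""
    msg = message_from_bytes(mailbox[filename])
    for part in msg.walk():
        if part.get_filename() == filename:
            return part.get_payload(decode=True)


upload("notes.txt", b"remember the always-on services")
restored = download("notes.txt")
```

The round trip works because MIME was designed to carry arbitrary binary payloads through mail; Gspace simply treats that capability as storage.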

Now let's talk about another service, G2G Share, offered by a 17-year-old teenager named Robbie Groenewoudt. This service is the logical evolution of Gspace: if my files are already loaded into Google's storage, why couldn't I share them as I see fit? Here's an excerpt from the NewsForge article Teen teaches Google to share, describing G2G Share:

A PHP script logs into subscribers' Gmail accounts and makes a list of all the files there, then publishes them with links on the G2G Share Web site. Anyone who visits the site can search for and download any files they please.
"If someone wants to download a file at your account, the system accesses [it] and forwards the mail with the file," says Robbie Groenewoudt, the 17-year-old author of G2G Share. "Everything is done by the system and no user will ever see any passwords." Gmail's labels serve as file indexers, and mail account holders can specify which labels are shared on G2G and which remain hidden.

This time, the information creation process is controlled by the users; Google doesn't control it. And Google's reaction was to shut down the G2G Share site. Here Google revealed its real, dangerous face. It turns out that the Information we store at Google is not ours. We cannot share it the way we want, with whom we want. We might be able to do so, but only if it's part of a Google process.

In Dave Winer's public stream of consciousness I found the following bits: "So here's an idea, let's start a company, hire some great people to run our database. Instead of being the users in "user-generated content," we'll be the owners".

Let's start the revolution.

*This is the concluding post in the "SOA as a Management & Control System" trilogy (first post: SOA, Matrix, second post: In SOA We Trust).


By Blogger Adi Hirschtein, at 5:50 PM  

More frightening is the fact that there is no privacy any more, our information is being monitored and analyzed by google,gmail has an engine that read and analyze the information we send and receive via our account and they put their advertising accordingly so only god knows (if he has gmail account) what else they can do with all the information they know about us.

By Anonymous descent, at 2:02 PM  

G2G, the way you present it, appears to be a huge security risk - a server with the emails and passwords of gmail users, ran by a 17-year old? If this system were compromised in any way, untold numbers of gmail users' accounts would be exposed. Then groups with truly malevolent intentions would have a treasure trove of personal data for fraud, identity theft, and more. Google allows and encourages services that encourage the personal use of information - gDrive is only accessible by the user. The fact that you can download your entire gmail account flies in the face of the idea that Google wants to control information through their services. Information is still ours, Google just provides tools to help us access it better.

Post a Comment

Tuesday, December 13, 2005


Last month I listened to a podcast by Dan Farber from ZDNet, SAP's Shai Agassi: Unplugged, in which Agassi promises that future releases of SAP will have ten to twenty thousand Services. A week later, in an InformationWeek article titled SAP's Architecture Shift, AMR Research analyst Jim Shepherd was quoted saying "Until they [SAP] take that giant application and break it into thousands of Web services, I don't consider the application a service-oriented architecture platform". (!)


So far I have had to struggle with bespoke Services proliferation; having additional tens of thousands of services from SAP, Siebel et al. sounds to me like an IT torture fantasy by some IT Outsourcing conglomerate.

Proliferation of Services prevents a reusable, sensible, enterprise scale adoption of SOA. What exactly should I be doing with 20,000 services? Build IT Applications around them? Read their specifications? Juggle with them? Juggling is a good idea.

For one thing must be said straightforwardly: Services proliferation is not going to shorten IT Time-To-Market, nor is it about to improve productivity or software maintainability.

Clearly, a higher abstraction level that will reduce the number of available Services is required. I suspect, though, that the Package Vendors (such as SAP) would argue that these Services are "already" coarse-grained and high-level, and that they represent not a single function but rather an entire Business Process (such as the "Track Shipment" mentioned by Agassi in the podcast). And in an Enterprise, thousands of Business Processes exist…

If that's the case, then we should probably rethink the whole thing: no longer a coarse-grained Service per Business Process, but rather a coarse-grained Service per Business Entity. CRUD Services on Business Entities. "Track Shipment" is nothing but a Read operation on the "Shipment" entity, asking for its "Location" attribute (+ the attribute's historical values).
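The idea can be sketched as a single coarse-grained CRUD service per Business Entity. This is an illustrative toy, not a proposed SAP interface; the class and method names are invented, and attribute history is kept so that a "Track Shipment" style query is just a Read with history.

```python
class EntityService:
    """One coarse-grained CRUD service per Business Entity."""
    def __init__(self):
        self.entities = {}   # entity_id -> {attribute: [historical values]}

    def create(self, entity_id, attrs):
        self.entities[entity_id] = {k: [v] for k, v in attrs.items()}

    def update(self, entity_id, attr, value):
        # Appending (rather than overwriting) preserves attribute history.
        self.entities[entity_id].setdefault(attr, []).append(value)

    def read(self, entity_id, attr, history=False):
        values = self.entities[entity_id][attr]
        return list(values) if history else values[-1]

    def delete(self, entity_id):
        del self.entities[entity_id]


shipments = EntityService()
shipments.create("SH-1", {"Location": "Warehouse"})
shipments.update("SH-1", "Location", "In transit")
shipments.update("SH-1", "Location", "Delivered")

# "Track Shipment" is nothing but a Read on the Shipment entity:
current = shipments.read("SH-1", "Location")
trail = shipments.read("SH-1", "Location", history=True)
```

One entity service replaces however many process-named services touch that entity – which is the whole point of the abstraction.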

Naturally, these coarse-grained Business Entities Services should be built on top of finer-grained Services – and that's fine as long as they are not exposed to application developers.


Post a Comment

Thursday, December 08, 2005

In SOA We Trust

In SOA, Matrix I described SOA as the foundation for the ultimate Management and Control System (unlike the common conception of SOA as a means to enable Integration, Interoperability, Reusability etc.). The Management and Control market reveals a deserted landscape, incapable of managing and controlling complex systems such as an Enterprise, a State, or the Internet. SOA is the first architecture to provide the foundation for complex-systems Management & Control. It does so by focusing attention not on the Objects that make up the complex system, but rather on their Conversations. Pretty much as the NSA is trying to protect the USA not by tracking down each individual but rather by capturing and analyzing conversations, SOA is intrigued not by the Service Providers, nor by the Services associated with them, but rather by the Business Conversations generated by the Services: what they say, when they say it, to whom they say it and where the Conversation is taking place. This linguistic reductionism deliberately ignores the Objects' variations and internal intricacies in favor of Business (aka, the Complex-System) syntax, semantics and speech acts – or briefly, in favor of text analysis.

If we accept this axiom, we could then point at THE paradigm shiftER that has enabled this revolution. And no, it is not the Web Service (see, by the way, its REST and [in a sense] RSS competitors – does the fact that Web Services are no longer alone undermine the concept of Services, or of mash-ups, or of tagging, or of Web 2.0? Of course it does not. Hence, it's not Web Services). I would agree, of course, with Dare Obasanjo that it is not even XSD :). The SOA revolution has been lit up by the simplest form of… XML. It is not a particular XML standard that has changed the way we do things, but rather the simple fact that Resources have started to converse by exchanging TEXT, and that this TEXT can be meaningful in the context of a Business Conversation.
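The point that plain TEXT carries the Business Conversation can be shown in a few lines: the message below is meaningful with no knowledge whatsoever of the Resource that produced it. The message shape is invented for illustration; only the standard library XML parser is used.

```python
import xml.etree.ElementTree as ET

# A Business Conversation as exchanged TEXT - who speaks, to whom, about what:
message = """<conversation service="TrackShipment"
                           consumer="Portal" provider="Logistics">
  <shipment id="SH-1"><location>In transit</location></shipment>
</conversation>"""

root = ET.fromstring(message)

# Everything Management & Control needs is recoverable from the text alone,
# without touching (or even knowing) the application behind it.
service = root.get("service")
speaker = root.get("provider")
location = root.find("shipment/location").text
```

This is the linguistic reductionism in practice: the analysis works on words, not on objects.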

Under this light, the idea presented in SOA, Matrix should be more comprehensible. If SOA is a linguistic phenomenon, then Services are Signifiers and Resources (Service Providers) are Signified. The semantic relation between a Service and a Resource (a Service Provider) is, like in any natural language, Arbitrary. Having said that, two implications arise:

A. the Service is indifferent to the Resource that implements it. In different contexts, the same Service could be realized by different Resources.

B. It also implies that a Resource can realize different Services.

Hence, the following statement, which I see so many times, is simply wrong: "Services exposed by an application (such as SAP)". Services are never exposed by an application; APIs are exposed by an application.
Services are always Words in a (specific) Business (Enterprise) Language. It is the Business that decides on the creation of Business Speech Acts (Services) and on their association with certain APIs. Naturally, the confusion should be attributed (again) to the misuse of Web Services – a technological standard to represent APIs (which immediately implies another technological standard instructing how to consume these APIs). Again, a Resource (SAP) does not define the Business (Enterprise) Services – it's the Business that defines them. I reckon this might be controversial. But do bear with me for a little while.

If the definition of Services and their Semantics is external to the Resources, it implies that the inter-resource Business Conversations happen at the SOA level, without the Resources' awareness. It is a completely non-intuitive thought for us – rational, cause-and-effect Westerners – but do think of the Butterfly Effect, which is so much identified with complex systems and chaos. A Resource-Butterfly might be producing something in its day-to-day behavior. This "something" could be updating its own database, producing a log entry, raising an exception, or performing any action it has been programmed to do. On the Resource-Butterfly level, this is it. In its limited beholder's eyes, the chain of cause-and-effect has reached its end. But in the outer-resource world this event has a different meaning, a Business meaning. Something external to the Resource might interpret the Resource's actions or non-actions (idleness) and decide to act upon them.

If the actions or the requests for actions (Speech Acts) work with symbolic words (Services), Management & Control is simplified. A complex system that has to govern its internal elements' interactions (or non-interactions) will do it much more easily if it can settle for symbols, not having to deal with the heterogeneity of the physical world. Therefore, a Business Language with Services as words and policies as syntax fits here perfectly. Do note that the Business Language, Syntax and Business Conversations (aka business reactions) do not imply changes to the Resources' language or view of the world. As there is a phase of interpretation from the Resource-produced events into Business Semantics, the Resource can keep on living its ignorant, happy life. Having said that, successful Enterprise SOA implementations should start, and can start, without a change to the existing applications.

So what are the elements that have to be present in this ultimate Management and Control System?

First we've got the Eavesdropper, whose role is to listen to private conversations. Then we've got the Interpreter, which translates the information into Business Symbols. Next comes the Controller, which, based on the Services' past, present and predicted future, makes on-the-spot decisions. The Controller does not know the Resources behind the Services – its role is to operate the Services, "to do things with words".
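The three elements can be wired together in a toy pipeline. This is an illustrative sketch only: the raw events, the vocabulary mapping events to Business Symbols, and the policy rules are all invented for the example.

```python
def eavesdrop(raw_events):
    """Eavesdropper: listen in on the resources' 'private conversations'."""
    return [event for event in raw_events if event.strip()]


def interpret(event, vocabulary):
    """Interpreter: translate a raw resource event into a Business Symbol
    (a Service word) - or None if it carries no business meaning."""
    return vocabulary.get(event)


def control(symbol, policies, invoked):
    """Controller: react per policy by invoking a Service. It operates
    symbols only; it never sees the Resources behind them."""
    if symbol in policies:
        invoked.append(policies[symbol])


vocabulary = {"ERR disk_full db7": "StorageExhausted"}   # interpretation
policies = {"StorageExhausted": "ProvisionStorage"}      # business syntax

invoked = []
raw_events = ["", "ERR disk_full db7", "INFO heartbeat"]
for event in eavesdrop(raw_events):
    symbol = interpret(event, vocabulary)
    if symbol is not None:
        control(symbol, policies, invoked)
```

The resource that logged `disk_full` never learns that a Service was invoked on its behalf – the whole reaction happens at the symbol level.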

Hence, SOA is actually a 360° life-cycle management & control system for… symbols.

As I mentioned in the previous post, Cisco is one of the rare industry leaders that got it right. The Cisco Intelligent Information Network with the recently released Cisco AON (Application-Oriented Networking) understands that if they control the communication lines that all Resources are connected to, they can actually manage and control a complex system, using symbolic manipulation. With Cisco AON there's no need to install or designate a dedicated Eavesdropper for each Resource, as all private conversations run anyhow on the network Cisco is controlling. So Cisco performs the Interpretation into the Business Language and lets the administrator define how the Cisco AON Controller should react. The reaction would be an invocation of a Service. What's behind the Service? Cisco does not know and shouldn't care. It could be a human being receiving an email or an SMS notification telling him to run and install a new server for a specific application; or it could be SAP.

If Cisco enables full life-cycle management of (only) Services, or symbols, or words inside its AON offering, it will become a real end-to-end solution for complex-systems management. IBM understands that already (see also: Why IBM bought DataPower?).

*The last post in the "SOA as a Management and Control System" is Organizing the Information of the World, in which it is explained why Google is the Real-World SOA.


Post a Comment