Saturday, September 24, 2005

A Hell of a (SOA) Day

Today it happened. I knew something was going on underneath, an effervescence of some kind, and yet I was surprised to get it in such a straight, blunt manner in my mail this morning. I am using Google Alerts on many topics, so every day I get a mail with all the publications Google indexed the day before. One of the topics is SOA, and usually I get what I quickly dismiss as yet another PR piece by the Gorillas or by startups regarding a fantastic SOA solution.

But today I had the most unexpected mail, with all these headlines: The SOA glass -- 35% full, 65% empty, an analysis of a recently published Cap Gemini report on SOA adoption; UDDI inadequate for SOA on the pitfalls of the OASIS UDDI standard and the emerging new standard by ebXML; and finally, the most unambiguous one, i-Technology Viewpoint: "SOA Sucks", which is less violent than its title suggests.

Well, is there something the Vendors forgot to tell us? Is this journey not ending in Heaven? No? Really? Are you sure? But we've paid 5 million to all these middleware vendors and at least a couple more to that system integrator, so now you're telling us that SOA sucks*?!

*picture on the left is for those familiar with Enterprise Logging (The Recursive Enterprise, Part II)

Naturally, I'd like to refer you to all my previous posts; they all touch this exact issue. But I will add here two anecdotes that beautifully demonstrate this hell of a (SOA) day. The 1st is related to a conversation I held with the CIO of a US entertainment corporation. I tried to present him with my way of doing Enterprise SOA, and the minute he heard that word (SOA) he cut me off, saying: "Just not that. I've had enough of all these vendors coming over to my place, offering to sell me a dozen middlewares that cover different aspects of this 'SOA', and guaranteeing that once all the middlewares are in place, my problems will evaporate. I don't buy their story and I don't like this kind of project". Of course, I convinced him that "mine's better" (ha, ha, but it is).

And here's the 2nd anecdote, in the form of an e-mail I got from a very large System Integration house while I was in my previous position as Orange Israel's Chief Architect. I haven't touched this mail (and I've kept it as evidence ever since…).

"In order to implement SOA you need the following:
1. Products for SOA Platforms, like AmberPoint and Cape Clear
2. Products for SOA Management, like CA WSDM
3. Products for SOA Security, like Forum Systems XWALL
4. Products for Meta data management and EII (major part of SOA)
5. Products for SOA Architecture, like Popkin System Architect and ARIS
6. And last but not least - connectivity tools, like IWAY, ItemField and Anysoft, because at the end-points you need to connect to something…"

Would you buy this cat?



Wednesday, September 21, 2005

The Enterprise Walk: Technology is not an issue

I was recently summoned by a large financial institute, located somewhere in the world, to review the architecture of a core, mission-critical project they had outsourced to a major software development house.
While hearing and seeing the software house's architects explain the solution's different functional components and their interactions, I felt an ever-growing uneasiness. These guys were no doubt domain experts in the Microsoft world; they recited all the right words: Whidbey, Yukon, Biztalk 2005, and (the inevitable) XML and Web Services. They also presented a layered architecture with UI on top, followed by a business logic layer, a connectivity layer and so on. So they talked the talk; but they didn't walk the (Enterprise) walk.

What these guys missed altogether is that they were not architecting a software application, but rather an Enterprise Solution. When a software house builds an Enterprise Solution (and we're talking a millions-of-US$ Enterprise Solution), the Enterprise does not really care if the Solution will be developed using Biztalk or Whidbey, nor would it give any approving nod had it been J2EE or AJAX. Technology is not an issue in the architecture of an Enterprise Solution. Technology might be relevant for the software developers (i.e. the software house), but in itself, detached from a specific Enterprise Context, Technology does not matter! The software house architects confused Technology and software development principles with Enterprise Solution Architecture, and that's what was wrong.

So what's Enterprise Solution Architecture? There are two kinds of architectures: functional and non-functional. Both comply with the same principles of order, clarity, simplicity, ease of use and manageability.

The functional architecture layers the business processes, i.e. the Solution functionality, in a coherent, logical, business-wise fashion. A good example of that would be the TeleManagement Forum's eTOM (enhanced Telecom Operations Map), where Telecom Business Processes are layered across multiple dimensions of Enterprise Life-Cycle Stages (such as Fulfillment, Assurance and Billing) and Business Entities (such as Customer, Service, Resource and Supplier). So when an Enterprise Solution is designed, it's not a bad idea to layer the Solution's business processes on a larger Enterprise Functional Architecture to see where the Solution fits. Actually, most of the big Enterprise Software ISVs, such as Oracle, PeopleSoft, JD Edwards and Siebel ( :) ), have started to release versions of their products adapted to each Vertical's functional architecture (PeopleSoft for communications; SAP for automobile, banking and so forth).
So bear in mind: even though no Enterprise is created equal, there are good chances that Enterprises from the same Vertical adhere to a similar functional architecture.

The non-functional architecture lays the foundations for the life-cycle management of an application or an Enterprise. The same architectural principles mentioned above (order, clarity, simplicity, manageability etc.) must apply here as well. But unlike the functional architecture which is determined by the business domain and the specific Enterprise needs, non-functional architecture is shared across all Verticals and Enterprises. It might even have a checklist!

I'll give here a short version of the non-functional architecture checklist. Bolded subjects indicate an impact on the Solutions' design and coding. When the subject font is in a regular style it's usually realized outside the Solution. Pay attention to the majority of bolded subjects: non-functional architecture is at the heart of any Enterprise Solution.

1. Change Management
Changes to the overall Solution's components (software code, infrastructures and applistructures - not just the software code!) must be managed in the following manner:
a. Impact Analysis: a critical factor in today's Enterprises. Who's who in the specific Enterprise Solution, across its entire life-cycle: requirements, code, tests, applistructures, services, servers, storage, versions, stakeholders and so forth. Obviously, inter-relations are necessary: which element is related to what?
This information must be formally represented, automatically updated, and easily accessed.
b. Version Management - we all know that for software. Is it maintained for the Solution's other elements? It must be. It probably can't be managed in the software version management package, but all the Solution's components must be versioned in accordance with the Solution's own version.
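To make the impact-analysis idea concrete, here is a minimal sketch (the element names are my own invention, purely for illustration) of the kind of dependency graph such a repository would maintain: given any element, it answers "which elements are affected if this one changes?".

```python
from collections import defaultdict

class ImpactGraph:
    """Tracks which Solution elements (requirements, code, tests,
    servers, stakeholders...) rely on which, so a change can be
    traced to everything it might affect."""

    def __init__(self):
        self.dependents = defaultdict(set)  # element -> elements relying on it

    def add_dependency(self, element, depends_on):
        self.dependents[depends_on].add(element)

    def impact_of(self, element):
        """Return every element transitively affected by a change to `element`."""
        affected, stack = set(), [element]
        while stack:
            for dep in self.dependents[stack.pop()]:
                if dep not in affected:
                    affected.add(dep)
                    stack.append(dep)
        return affected

g = ImpactGraph()
g.add_dependency("billing-service", depends_on="customer-db")
g.add_dependency("invoice-report", depends_on="billing-service")
print(sorted(g.impact_of("customer-db")))  # ['billing-service', 'invoice-report']
```

In a real Enterprise this graph would be fed automatically (from version control, deployment records, CMDBs) rather than by hand, which is exactly the "formally represented, automatically updated" requirement above.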

2. Dev/Test/Integration/Prod Environments
Normally, there will be multiple change tracks for a single Solution in different development stages. In my previous company we had more than a hundred dev/test/integration environments representing the same set of core applications in different stages and versions.
What's expected from the Solution provider is an installation kit for scratch install and delta updates of those different environments.

3. Regression testing
The management of a set of tests validating that "the rest of the functionality" is still in good shape. Do note: it's the management of this set, not the set itself. Automatically updating the content of the set; presenting the content of the set; running the set; showing its historical run outcomes – all that is part of the regression testing management. Some of the best Enterprise Solutions I know had this regression testing management incorporated inside them!

4. Integration
A solution usually consists of different modules and/or applications and usually it interacts with other enterprise solutions. Information is, therefore, exchanged internally and externally.
Every Enterprise Solution must, therefore, contain an internal Sub-Solution that handles the integration (there's an exception, see later on). All the Solution components must use the integration sub-solution for information exchange – be it external or internal. They must access the integration sub-solution using the same request payload standard, and they should receive standardized reply payloads from the integration sub-solution. I'm insisting on the terminology sub-solution (rather than sub-component) so it is clear that the sub-solution requires identical compliance with the non-functional architecture subjects, as if it were a standalone solution.
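As an illustration of what such a standardized request/reply contract might look like (the field names here are my own sketch, not a prescribed standard), every component would talk to the integration sub-solution through the same envelope:

```python
import uuid

def make_request(source, target, operation, body):
    """Wrap any payload in the Solution-wide standard request envelope."""
    return {
        "message_id": str(uuid.uuid4()),
        "source": source,
        "target": target,
        "operation": operation,
        "body": body,
    }

def make_reply(request, status, body):
    """Build a standardized reply, correlated to the originating request."""
    return {
        "message_id": str(uuid.uuid4()),
        "correlation_id": request["message_id"],  # ties reply to request
        "status": status,
        "body": body,
    }

# Any component, internal or external, exchanges information the same way:
req = make_request("crm", "billing", "get-invoice", {"customer": 42})
rep = make_reply(req, "OK", {"total": 118.0})
```

The point is not these particular fields, but that one envelope serves every exchange, so the integration sub-solution can route, log and monitor uniformly.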

5. Service Orientation
The Solution must be designed in a way that outsourcing its internal sub-solutions or functions would be possible (thus enabling the reuse of existing components and the reduction of vendor lock-in). If, for instance, there's an Enterprise Integration Architecture in place to which all Enterprise components adhere, the Solution must be able to use it instead of its own integration sub-solution.
Do note: there's absolutely no need to think "Web Services" whenever you see Service Oriented Architecture. As I mentioned earlier, technology is not an issue, and "Web Services" is just one out of many alternatives to technically realize SOA.

6. Scalability
This is the ability to easily scale out. Note that scale-up is no longer acceptable, as real-time Enterprises are adopting the internet architecture to streamline operations. Scale-out practically translates into cloning and partitioning. Different modules of the solution can have multiple, concurrently running instances (cloning), each taking care of a subset of the workload (partitioning). There must be a coordinator that either distributes the workload or performs effort de-duplication (I'll explain this better in another post).
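A tiny sketch of the partitioning half of that idea (an illustration, not a prescription): each clone, or the coordinator, hashes a work item's key to decide which instance owns it, so the same item always lands on the same clone and no two clones duplicate effort.

```python
import hashlib

def owning_clone(work_key: str, n_clones: int) -> int:
    """Deterministically map a work item to one of n identical clones."""
    digest = hashlib.sha1(work_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_clones

# Every instance computes the same answer, so the workload is
# partitioned without any clone stepping on another clone's subset.
assignments = {key: owning_clone(key, 4)
               for key in ("cust-1", "cust-2", "cust-3")}
```

Real coordinators must also cope with clones joining and failing (which plain modulo hashing handles poorly), but the principle of deterministic ownership is the same.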

7. Availability
This refers to the Solution's ability to assure non-stop operation, regardless of failures and utilization peaks. I sincerely believe, and my experience proves, that clusters are no longer relevant for today's mission-critical Enterprise Solutions. The failover time is far too long. Actually, Real-Time Enterprises cannot tolerate failovers. Designing an Enterprise Solution for streamlined operations with no failover is not a trivial mission, and vendors should thoroughly explain how they cover this.

8. Data Consistency
Most of today's and tomorrow's Solutions exchange asynchronous messages to get and set information (internally and externally). Guaranteeing that every message reaches its destination, is processed once and only once, and is properly returned to its requester is a tedious task. Failures can happen all along the integration chain; queues and databases can get corrupted; systems can fail in the middle of processing. These are no longer the days of XA transactions with an automatic rollback – oh, no! Those days are gone for good.
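One widely used way to approximate once-and-only-once processing is an idempotent consumer that remembers the message ids it has already handled, so a redelivered message is recognized and skipped. The sketch below is an illustration only; the in-memory set stands in for what must, in a real Solution, be a durable store updated atomically with the work itself.

```python
class IdempotentConsumer:
    """Process each message at most once, even if the channel
    redelivers it after a failure."""

    def __init__(self, handler):
        self.handler = handler
        self.processed = set()  # real systems: durable, atomic with the work

    def receive(self, message_id, payload):
        if message_id in self.processed:
            return "duplicate-ignored"
        result = self.handler(payload)
        self.processed.add(message_id)
        return result

payments = []
consumer = IdempotentConsumer(lambda p: (payments.append(p), "processed")[1])
consumer.receive("msg-1", {"amount": 50})
consumer.receive("msg-1", {"amount": 50})  # redelivery: handled only once
```

Combined with at-least-once delivery from the messaging layer, idempotent processing gives the effect of exactly-once without relying on distributed XA transactions.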

9. Monitoring (don't think of HP-Openview, please...)
Monitoring is often confused with 3rd party monitoring frameworks. That's a great mistake. 3rd party frameworks do not understand the Solution! They might recognize the Solution's underlying hardware or applistructures, but that's basically it. The Solution must manage its own well-being counters, as defined by the business stakeholders of the application. In the design review I described at the beginning of this post, the business stakeholder had defined 20 or 30 response-time thresholds for different use cases. Each of these response-time thresholds must have a counter that maintains threshold information and communicates it to the outside world. If you follow Microsoft's Dynamic Systems Initiative, you'd see that this has become a pillar of their "Design for Operations" architecture.
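A minimal sketch of such a well-being counter (the use-case name and threshold are illustrative): the Solution itself records each use case's response time against its business-defined threshold and exposes the result to the outside world.

```python
class ResponseTimeCounter:
    """A self-monitoring counter: the Solution tracks its own
    well-being against a business-defined threshold."""

    def __init__(self, use_case, threshold_ms):
        self.use_case = use_case
        self.threshold_ms = threshold_ms
        self.total = 0
        self.breaches = 0

    def record(self, elapsed_ms):
        self.total += 1
        if elapsed_ms > self.threshold_ms:
            self.breaches += 1

    def status(self):
        """What gets communicated to operators and dashboards."""
        return {"use_case": self.use_case,
                "total": self.total,
                "breaches": self.breaches}

login = ResponseTimeCounter("login", threshold_ms=200)
login.record(150)
login.record(250)  # one breach of the business threshold
```

A 3rd party framework could then *read* these counters, but only the Solution can *define* them.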

10. Operations
Given the complexity and distributed nature of today's Enterprise Solutions, it becomes highly desirable, if not a must, to have an Operator's console as part of the Solution. Through this console the Operator can control the different elements of the Solution: performing start/stop; examining current happenings (for instance, current queue state, number of processed files, etc.); investigating past states, alerts, logs, trends and so on – all from this central console.

11. Problem Resolution, Log & Audit, Debugging
If the Solution is aware of its self-state (through the employment of KPI & KQI counters), it becomes possible to direct the support teams toward the potential source of a production problem. By providing a consolidated logging architecture, i.e. all sub-solutions and sub-components logging in a standard manner into a unified, designated location, problem resolution time is dramatically reduced.
Non-intrusive switching of logging levels should be enabled across all sub-solutions and sub-components, so that debugging a distributed, asynchronous Solution is easier.
Combining Problem Resolution techniques with Centralized Operations capabilities is crucial for smooth operations. "Transferring knowledge" to operation teams is impractical and futile in the recursive nature of today's Enterprises; providing them with adequate tools and UIs is simply a must.
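As a sketch of what "logging in a standard manner into a unified location" might look like in practice (the field names are illustrative, not a standard), every sub-solution would emit the same record shape:

```python
import json
from datetime import datetime, timezone

def log_entry(component, severity, message, correlation_id=None):
    """One log format shared by every sub-solution and sub-component,
    so the unified log store can be queried uniformly."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "component": component,
        "severity": severity,
        "message": message,
        "correlation_id": correlation_id,  # ties the entry to a business process
    })

line = log_entry("billing.integration", "ERROR", "outbound queue stuck",
                 correlation_id="proc-7f3a")
```

Because every entry carries the same fields, one query against the designated store can pull all ERROR entries, or all entries of one business process, regardless of which sub-solution wrote them.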

(12. And if you insist: security.)

The beauty of the non-functional architecture is that it repeats itself across all possible dimensions, from the component level to the Enterprise level. As such, it is highly amenable to service orientation. Most of the non-functional architecture elements can be outsourced into Enterprise Services that will provide the required functionality for all components, sub-solutions and solutions across the entire Enterprise. In the future, Intelligent Enterprise Services will be used as part of the non-functional Enterprise Architecture, so the Enterprise will become autonomic and self-managed. This will prepare the grounds for the rise of the machines.



Wednesday, September 14, 2005

Enterprise Logging (The Recursive Enterprise, Part II)

This is the 2nd post on the subject of the Recursive Enterprise.

In the previous post I described a fractalized, recursive Enterprise - a Babushka of a Service inside a Service inside a Service. I ended the post wondering how we could possibly pinpoint the location of a potential problem in such a recursive-yet-distributed business process architecture, and suggested that Enterprise Logging could be something worth looking at.

So just before discussing Enterprise Logging and weighing its pros & cons against the challenge we've got, I suggest we take a tour of the current state of Logging in the Enterprise (which is totally different from Enterprise Logging…). I'll keep it short and dry.

1. Most of the 3rd party elements, software packages, middlewares and appliances are logging their state & status.
2. On the other hand, most of the in-house, bespoke applications are suffering from a serious shortage of logs.
3. Not all logs are made equal. Actually, logs are annoyingly resistant to any standardization attempt (not that there are that many logging standards out there, but still there are some around). Logs differ in their payload format, their content, their semantics, their distribution channel and so forth.
4. Functional, business applications almost always log exceptions and/or functional misbehaviors. They log neither state (operational as well as functional) nor status (KPIs [Key Performance Indicators], KQIs [Key Quality Indicators]). The bitter truth is that most applications do not even bother to collect this information.
5. Logs are mostly ignored until a failure occurs.

I'd say this is enough to make one thing clear: in the current Enterprise Architecture there's no way we can methodically pinpoint a problem in an SOA/GRID Business Process; so along with SOA & GRID a serious, exhaustive retouch must take place, or otherwise those magnificent Services will become black holes.
(Note: actually, this logging state also prevents non-SOA Enterprises from properly handling production faults or any other issues related to distributed systems. Simply put, as stated many times already, SOA aggravates the situation. It's the difference between difficult-but-somehow-possible and impossible.)

So let's do a quick retouch:

All in-house, bespoke applications (and Enterprise Services fall well within this category) must report on specific categories, in a specific way, at specific times. Differently put, we must have some kind of a Logging standard. A surprisingly good logging standard that covers format, content and situation is the IBM Common Base Event model, launched as part of IBM's autonomic computing initiative. IBM figured out (quickly?) that if they want a framework that understands what's going on so it can fix things, there's one thing they can no longer avoid – standardizing the logging of their entire stack. I strongly recommend reading (and using) this standard.

The Common Base Event model, or something alike, could (and should) be introduced into newly built applications; but what about the legacy ones (the other thousand-and-one log formats, contents, etc.)? They clearly should be translated as well, or the Common Base Event model would be just another log format. Remember: languages were created to generate chaos; we aspire to minimize chaos, hence the logical attempt to revert to a unified, pre-Babylonian language.
OK. We'll employ whatever technique to transform all logs into the Common Base Event model. But then what?
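As a sketch of that "whatever technique": a small adapter that parses a legacy log line into a normalized record. The legacy format below is invented for illustration, and the output field names only loosely echo the Common Base Event's spirit; they are not the exact CBE schema.

```python
import re

# Hypothetical legacy line: "2005-09-14 10:22:31 ERROR billing: queue full"
LEGACY = re.compile(
    r"(?P<ts>\S+ \S+) (?P<sev>\w+) (?P<comp>[\w.-]+): (?P<msg>.*)")

def to_common_event(line):
    """Translate one legacy log line into a normalized, CBE-like record.
    Returns None for lines the adapter does not recognize."""
    m = LEGACY.match(line)
    if not m:
        return None
    return {
        "creationTime": m.group("ts"),
        "severity": m.group("sev"),
        "sourceComponent": m.group("comp"),
        "msg": m.group("msg"),
    }

event = to_common_event("2005-09-14 10:22:31 ERROR billing: queue full")
```

One such adapter per legacy format, and everything downstream (the warehouse, the queries, the correlation) sees a single language.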

When loaded into a data warehouse, this standardized, enterprise-wide log provides a magnificent panoramic view of the entire Enterprise landscape. Imagine that any programmer, operator or administrator can log into one system, through the same UI, and see historical as well as real-time events from whatever the object of interest is: a service, an application, a router or a database – all in one. This infrastructure also lays the foundations for correlation, data mining, prediction, prognosis, capacity planning and many more enterprise architecture efforts. (You can have a look at a Microsoft case study documenting this Enterprise Logging System.)

This warehouse, if based on [near] real-time events, can serve for manual pinpointing of [near] real-time problems in a complex, recursive business process, given knowledge of its topology. This knowledge may be partially documented or captured in the heads of some IT people, but essentially it does not exist. When Enterprises start using dynamic service binding, topological knowledge will have to be automatically generated. This feature will be part of the new Enterprise Architecture Framework, where objects will not be created, configured, monitored & controlled by humans, but rather by the machines.



Saturday, September 10, 2005

Moral OWL

Two weeks ago I had a couple of posts on the subject of ontologies, in which moral issues were raised. I followed the traces left by Udi, who commented on these two posts, and bumped into Udi's LiveJournal. In the journal, probably as a follow-up to his comments on this blog, I found a text which I think has some poetical essence in its structure and content. I've asked for Udi's permission to bring it here. I call this bit Moral OWL (OWL is an anagramic TLA for the W3C Semantic Web standard, aka the Web Ontology Language).

Monday, August 22nd, 2005, 7:07 pm

Practical (geek) religion

This is what I'm going to do:
- Build an OWL ontology, defining the rules for what a good behavior is.
- Add all of my behavior decisions to this ontology.
- Run the reasoner to see whether the behavior decisions are consistent with the rules of good behavior or not
- If so, use the brain reasoner to decide whether the behavior decision is good or not & if not, update the rules of good behavior
- Execute only behavior decisions that are found to be good
- Run forward chaining to see what behavior decisions can be deducted from the ontology, with the goal of maximizing good behavior actions (& eventually good events).

I thought about this Moral OWL for a while, and concluded that even though it appears to be good, it is actually a dangerous tool. Once in the VAT (in its simple Web 2.0 offering, or in its advanced, real-time thought processing capabilities) it's a matter of time until we get scrutinized, tagged and even filtered out! Whoever does this might use a proprietary Moral OWL, in which good is evil and evil is good.

So, indeed, technology in itself is not evil. And though I approve of burning such a Moral OWL into the brains of the Intelligent Machines (classical for Asimov's Robots; highly desirable for Agent Smiths), I fear its potentially dangerous application to mortals. The Mor(t)al OWL has to be put in a safe.


By Blogger Udi h Bauman, at 8:09 PM  

Actually, part of my motivation was indeed to eventually apply it to Agent Smiths, but my primary goal was to improve myself. As an engineer, I need to model my brain & its environment, in order to be able to fix its workings.

But the distinction between applying Moral OWL to mortals & to machines is a temporary thing. As Kurzweil says, the answer to the question of who has higher intelligence, humans or computers, will soon depend on how you define a human.

Besides, what's the difference between a Moral OWL & any other religion/ethical code? I just find this language better suited to my needs. Many logical positivists would have been sooooo happy to be alive today!!

Thanks for the very interesting comments & posts.


By Blogger Muli Koppel, at 9:53 PM  


I think you touched on all the points which make the Moral OWL a frightening tool. Do you really want to model your brain into a rigid schema? Don't you want to have a nasty thought here and there? Do you really want to live in a police-state, controlled by the Moral OWL? I also don't get your Kurzweilian point. Assuming Robots will have higher intelligence than humans – would they, the Robots, want to live under the Moral OWL? ("Do Androids Dream of Electric Sheep?" [filmed as Blade Runner], by Philip K. Dick, touches this point and postulates that intelligent robots would go, just the same, against the Moral OWL)

Nevertheless, I like very much the originality of your idea.


Thursday, September 08, 2005

When Mandelbrot met Lynch (The Recursive Enterprise, Part I)

In most of my previous posts I have described today's Enterprise IT as a holistic and organic system. From an IT macro-perspective, all business processes, business applications and the hardware that supports them are hubs and nodes in a multi-dimensional, highly complex and integrated graph, serving the business needs for information storage and retrieval.

What's amazing in this story is that the IT macro-perspective is identical to the IT micro-perspective. If you take, for instance, a single business process, you'd discover that it is a mini-IT, spanning multiple applications, hardware and (even) organizations, very similar to the multi-dimensional and complex graph of the entire IT. Differently put, "zooming in" from the Enterprise level to more internal levels simply shows similar pictures.

When macro perspective matches micro perspective we think of fractals. "Fractals graphically portray the notion of 'worlds within worlds' which has obsessed Western culture from its tenth-century beginnings."

Fractals are found in abundance in organic natural objects like clouds, mountains, coastlines, river networks, leaves, dust, systems of blood vessels and so forth. Organic systems are mostly nonlinear and chaotic.

So Enterprise IT is a chaotic system, and long-term prediction about the data center's service level is unrealistic (IT is un-QoS-able), just as any attempt at a long-term prediction of the weather is futile.

Enterprise SOA aggravates the situation. By facilitating easy assembly of Services into new virtual applications, Enterprise IT is becoming more fractalized and Recursive than ever before. And pinpointing problems in normal daily operations is… simply different.

Take, for example, a Business Process encapsulated in a Service. Customer Service Support complains that they get too many calls regarding the performance of this Service (that's the typical SOA "Systems are slow" fault, which, as you'll soon see, can drive you crazy). There's nothing suspicious in the IT ops monitoring console, so some brave IT people decide to have a look inside the problematic Service. And what they discover inside are... more Services; they peer into each of these – and the picture's the same: more Services! In three years or so, they'll get to the last functional Service just to reveal some more service calls, this time to the GRID management layer, asking for Computing Resources. The GRID management layer will invoke the Services of the virtual shared-distributed memory, of the virtual SAN and of the virtual machines. The VM layer will call the remote GRID service made available by IBM, to get drops of CPU leftovers from their facilities located on the dark side of the moon. And so forth.

Trying to pinpoint the exact place of a problem is like finding a needle in a haystack. I can envision some high-ranking IT Officer shouting, enraged: "Stop with these Services! Bring me R-E-A-L ___ing Objects!!!", and some will vehemently swear back: "We'll not rest till we find it, Sir!". But, just like the Roman soldiers who flooded the small hiding room of the Judean People's Front (or was it the People's Front of Judea?), these IT guys will eventually come back, humiliated, whispering "We found this", raising a spoon up high. Indeed, a "Biggus Dickus" situation.

This situation can be avoided, with the help of the Log Lady and some of her friends (But I'll stick to logging in this post).

Following our fractal thinking, we would expect to find similar patterns at both the micro and the macro levels. So if a micro-object, like an Apache Web Server, logs the well-being of its components, we'd expect to see similar logging patterns on the Enterprise level as well. Enterprise components are business processes, products and services. We'd like to see then, for a start, a detailed log of a business process.

But think about it for a second: what is the meaning of logging an SOA, distributed, asynchronous, Mini-IT business process? Is it the collection of all the logs produced by each of its sub-components? But these n sub-components serve not just one business process, but rather hundreds or thousands of them. So how would we know which log entry in each of the sub-components is related to our business process in question? That's a colossal correlation challenge.
Or should we conceive a new logging technique for the purpose of our new fractal, recursive, Enterprise?
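One answer, sketched below, is to mint a single correlation id at the edge of the business process and have every sub-component stamp its log entries with it; the entries belonging to one process can then be fished out of thousands of others. The service names are, of course, made up for illustration.

```python
import uuid

def start_process():
    """Mint one correlation id when the business process enters the Enterprise."""
    return str(uuid.uuid4())

def call_service(name, correlation_id, log):
    """Each hop logs under the same id. In reality the id would travel
    in the message header to the next Service, however deeply nested."""
    log.append({"service": name, "correlation_id": correlation_id})

shared_log = []
cid = start_process()
for svc in ("billing", "rating", "provisioning"):
    call_service(svc, cid, shared_log)

# All entries of *this* process, extracted from the shared log:
mine = [e for e in shared_log if e["correlation_id"] == cid]
```

The hard part is not the id itself but propagating it through every recursive Service call, which is exactly why a logging standard is needed rather than per-application conventions.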

In my next post, I will discuss the current state of Enterprise Logging and we'll see where it will get us.

"Stars, moons, and planets remind us of protons, neutrons, and electrons" – the Log Lady.



Tuesday, September 06, 2005

Penguins at the G.A.T.E.S of Troy

It was one of those annoying days, with everyone chasing down their potential part in a "system is slow" fault – really nothing we hadn't met before. Yet this fault happened at a very peculiar time: we had just installed the 1st shipment of our newly bought, Linux-designated Sun v40 machines, and in a week or two we were supposed to receive the 2nd, significant shipment for our Linux migration project.

When the fault suddenly disappeared, with no apparent changes to configurations or to utilization, we decided to add more servers to the Terminal Servers Farm – just in case. And as we lacked spare Windows-enabled servers, half of the 2nd Linux servers' shipment was relocated to the Terminal Servers Farm.

It was this intuitive relocation of resources that made Microsoft's brilliant survival plot clear to me. And after giving it a 2nd thought, I realized Survival was the wrong word; Taking-Over was a much more suitable one. Microsoft's strategy, in short, was to create the perfect Trojan penguin, which would be happily embraced by Enterprises. By embracing it and having "Linux inside", Enterprises would necessarily do two more things: abandon Unix and embrace Windows.

The Microsoft/Yankee Group anti-Linux campaigns were, therefore, constructed to fulfill this strategy. The campaigns touched uniquely and consistently on one and only one subject: TCO. By comparing TCO, Microsoft achieved its two goals: Linux became a LEGITIMATE ENTERPRISE PLAYER, and its predator teeth were redirected toward Microsoft's real enemy – the Unix vendors.

Let us try to reverse engineer the chess game played before our eyes.

Historically, Microsoft adopted the x86 Intel architecture, which has been incapable of supporting Enterprise Business Applications. These applications were designed to cope with ever-growing business processing requirements by scaling up, adding more and more processors, memory cards and I/O channels to the same SMP machine. This is not the playground of a 2-to-4-way processor architecture.

In this game, Microsoft was helpless. But then Linux changed things around, with the very good help of Java's "write once, run anywhere". Linux was the 1st non-niche Unix O/S to embrace commodity x86 hardware. In other words, Linux used Windows hardware to run Unix applications. Which led Linux to the inevitable confrontation with Enterprise scalability requirements: business applications couldn't scale on Linux just as they couldn't scale on Windows. Same problem, buddy!

Surprisingly or not, Microsoft and Linux found themselves in the same boat of those "fabulous OSes" with a tiny, little, insignificant inability to scale up. But, hey – the Internet taught us that a much better scaling architecture is out there: the scale-out architecture, or the power of the masses. See Google, Amazon, Yahoo and all those names mentioned whenever economies of scale are discussed. They are using thousands of worthless servers, and if one fails – nothing happens. Why wouldn't we apply this same model to Enterprise applications?

Well, scale-out is not just another option on the scaling switch. A Business Application that was designed to scale up will have to go through major architectural reengineering efforts to make it scale out. To do so, ISVs must see a solid business case. So far, the major ISVs have had no incentive to do so. They were making a great deal of money by selling software that ran on huge Unix machines, and they had to maintain separate versions for each of the OS/hardware combinations offered by IBM, HP and Sun. As I wrote in an earlier post, testing costs dozens if not hundreds of millions of dollars a year. Maintaining two software architectures – a scale-up architecture for Unix boxes and a scale-out one for Windows boxes – was clearly neither cost-effective nor justifiable.

That's why Linux – facing the same problem with business application scaling – has become so strategic to Microsoft. If Linux succeeds in gaining market traction at the expense of the big Unix vendors, then ISVs would have their scale-out business case. They will need to support fewer OSes – Microsoft and Linux [only] – and they will be able to run their applications on the same physical hardware.
That's huge savings in functional and interoperability testing. And if the business application is coded in Java, it is the same software, the same hardware, just different OSes – a great advantage for the ISVs.

So how will Microsoft make more money through Linux?

A. Enterprises will run most of their applications on Windows hardware, so unlike in the days of the Unix regime, allocating and reallocating hardware for Microsoft apps becomes possible. And if it is possible – it will happen.

B. Microsoft will be able to play big time in the Enterprise game – a long-lasting, unfulfilled desire – as finally business applications will be able to run across (scale out over) several Windows servers, meeting the scalability and availability requirements of today's mission-critical Enterprises in a most cost-effective way.


Muli - I just discovered your blog, and I'm reading through many of the posts. I find your content to be fantastic. Thanks for all of the great information. I have a professional interest in distributed computing, and yours is the most informative blog I've found in this space.

By Blogger Dan Ciruli, at 1:52 AM

That's very generous of you. Thank you very much.

By Blogger Muli Koppel, at 7:11 AM


Sunday, September 04, 2005

Brains In a (Google's) Vat

I remember those few occasions on which I dropped some sentences about our new virtual self and waited to see what the reactions would be. Sometimes I could go on with my thoughts; sometimes I saw the puzzlement on my interlocutor's face.

The axiom I start with is communication. Without communication we don't exist. There's no I without a Thou (Martin Buber, "Ich und Du"), and there's no language without a We. Language is never a single man's creation, but rather a collective one (except perhaps Zamenhof's Esperanto, and even that is a planned language inheriting from existing ones). Reality is defined by language – no one can break the barriers of language, the wall of Words, and experience objective reality.

So communication is the Universe, and we are inside it!

Most of today's communication is digitized. Correspondence goes through e-mail; messages go through IM and SMS; Voice goes over IP, and so forth. So our "here and now" is digitized.

Yet the Internet, the digital world, went a step further – to Web 2.0, or the Social Internet. Now every one of us is revealing more and more of his/her own "self": "my" thoughts in Blogger, my links in, my photos in Flickr, my music in, my books in Reader2, my software in ….

All my mails, documents, chats, voice calls, links, photos, music preferences, books etc. are stored somewhere (most probably at Google's), along with the communication artifacts of other humans. If previously only "facts" were digitized (bills, licenses, government & commercial stuff), it is now our own selves, through mails, docs, photos, blogs and so on. The Internet has become an alternate Universe, populated by humans and not just by official documents. Our consciousness is there too.

Big Brotherhood has, therefore, become easier than ever: governmental Big Brotherhood in the form of the NSA; commercial Big Brotherhood in the form of Google. "Covered" by Forrester Research earlier this year, the theme of Google taking over Microsoft's role as the #1 hated company is getting stronger. And it's not just because Google is all over us – it's because they (like the NSA) are storing US: Brains in a Google vat.

With a bit of imagination we can envision a next-generation Web 2.0 startup that reconstructs Selves on the fly. Type a Self's name and they'll reconstruct him/her in seconds. You will be able to interact with this Self, and it might even show you its photos..., but I'm sure there will be a much nicer option available: they will call it Being John Malkovich. For just $10, you'd be able to experience this Self, for real (and remember, reality is just words...).

The next step will be to capture our thoughts, digitize them in real time, and store them in Google's storage, so our nice "reconstruct-a-self" startup could rebuild a real-time Self. This is not as ridiculous as you might think ("proof" being that Philip K. Dick wrote Minority Report 50 years ago; see the image on the left). Last year NASA announced that it had developed a computer program that comes close to reading thoughts not yet spoken, by analyzing nerve commands to the throat. It appears that when we think, we actually converse with ourselves.
Paradoxically (or not), the NASA experiment linked the thinking human into Google!

That's communication!

Privacy? I'd say – I never worried about it. If they want to know anything about YOU, they will – with or without RSA 1024-bit encryption. So, get into the groove!

NASA scientists have begun to computerize human, silent reading using nerve signals in the throat that control speech. In preliminary experiments, NASA scientists found that small, button-sized sensors, stuck under the chin and on either side of the ‘Adam’s apple,’ could gather nerve signals, send them to a processor and then to a computer program that translates them into words. (Image Credit: NASA Ames Research Center, Dominic Hart)

