Monday, October 31, 2005

Got a big database? Stick an API on it.

Preferably as many as you can. Then you've got a really useful service that people can build into their applications. Here's one to look forward to: The BBC's programme catalogue.
It turns out there's a huge database that's been carefully tended by a gang of crack BBC librarians for decades. Nearly a million programmes are catalogued, with descriptions, contributor details and annotations drawn from a wonderfully detailed controlled vocabulary.
7 million rows of data going back to the 1930s. Wow.
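The appeal is easy to sketch: even a toy database plus one query function is an "API". Here's a minimal Python sketch of the idea, with an invented schema and invented programme data - not the BBC's actual catalogue:

```python
import sqlite3

# A toy "database with an API on it": a hypothetical programme catalogue
# exposed through a single search call. Schema and rows are invented for
# illustration only.
def build_catalogue():
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE programmes (
        id INTEGER PRIMARY KEY,
        title TEXT, year INTEGER, description TEXT)""")
    db.executemany(
        "INSERT INTO programmes (title, year, description) VALUES (?, ?, ?)",
        [("The Sky at Night", 1957, "Astronomy series"),
         ("Doctor Who", 1963, "Science fiction drama")])
    return db

def search_programmes(db, keyword):
    """The 'API': a keyword search over titles and descriptions."""
    rows = db.execute(
        "SELECT title, year FROM programmes "
        "WHERE title LIKE ? OR description LIKE ?",
        (f"%{keyword}%", f"%{keyword}%")).fetchall()
    return [{"title": t, "year": y} for t, y in rows]
```

Wrap that query function in HTTP and you have a service anyone can build on.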

Saturday, October 29, 2005

Singularity: A Next Generation OS

Here's a fascinating document from Microsoft Research detailing work on Singularity. It's an OS designed around languages like Java and C#, built to support partitioned memory spaces and to handle dependable code.
SIPs are the OS processes on Singularity. All code outside the kernel executes in a SIP. SIPs differ from conventional operating system processes in a number of ways:
  • SIPs are closed object spaces, not address spaces. Two Singularity processes cannot simultaneously access an object. Communications between processes transfers exclusive ownership of data.
  • SIPs are closed code spaces. A process cannot dynamically load or generate code.
  • SIPs do not rely on memory management hardware for isolation. Multiple SIPs can reside in a physical or virtual address space.
  • Communications between SIPs is through bidirectional, strongly typed, higher-order channels. A channel specifies its communications protocol as well as the values transferred, and both aspects are verified.
  • SIPs are inexpensive to create and communication between SIPs incurs low overhead. Low cost makes it practical to use SIPs as a fine-grain isolation and extension mechanism.
  • SIPs are created and terminated by the operating system, so that on termination, a SIP’s resources can be efficiently reclaimed.
  • SIPs execute independently, even to the extent of having different data layouts, run-time systems, and garbage collectors.
SIPs are not just used to encapsulate application extensions. Singularity uses a single mechanism for both protection and extensibility, instead of the conventional dual mechanisms of processes and dynamic code loading. As a consequence, Singularity needs only one error recovery model, one communication mechanism, one security policy, and one programming model, rather than the layers of partially redundant mechanisms and policies in current systems. A key experiment in Singularity is to construct an entire operating system using SIPs and demonstrate that the resulting system is more dependable than a conventional system.
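The most striking of those properties - communication that transfers exclusive ownership of data rather than sharing objects - can be sketched in ordinary Python. Everything here (Handle, Channel) is an invented toy to show the discipline, not Singularity's actual API:

```python
# A sketch of the SIP idea that communication *moves* data rather than
# sharing it: sending through a channel transfers exclusive ownership,
# so the sender can no longer touch the message.
class OwnershipError(Exception):
    pass

class Handle:
    """An owning handle to a message; ownership can be given away once."""
    def __init__(self, value):
        self._value = value
        self._owned = True

    def release(self):
        if not self._owned:
            raise OwnershipError("ownership already transferred")
        self._owned = False
        return self._value

    @property
    def value(self):
        if not self._owned:
            raise OwnershipError("sender no longer owns this message")
        return self._value

class Channel:
    """A toy typed channel between two 'processes': messages are checked
    against the channel's contract, and ownership moves with the send."""
    def __init__(self, msg_type):
        self._msg_type = msg_type
        self._queue = []

    def send(self, handle):
        value = handle.release()          # sender gives up the object
        if not isinstance(value, self._msg_type):
            raise TypeError("message violates channel contract")
        self._queue.append(value)

    def receive(self):
        return Handle(self._queue.pop(0))  # receiver becomes sole owner
```

Rust's channels enforce exactly this move semantics at compile time; Singularity's contribution was making it an OS-wide rule verified by the compiler.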
Something to keep an eye on - this could be the type of approach needed to deliver modular OSes that run on hypervisors.

Thursday, October 27, 2005

Flock on Newsnight

Newsnight will be featuring a segment on the Web 2.0 movement and Flock in particular. While I'm a definite Web 2.0 sceptic, I think Flock is an interesting example of a next generation client application, providing a single (relatively) consistent user interface to a number of different applications that expose functionality via web services and other open APIs such as ATOM.

Lucite in Redmond tonight...

There'll be some shiny lucite Ship It awards on those desks in Redmond tonight, as, according to the blog of Somasegar, Microsoft's head of Developer Tools, Visual Studio 2005 and .NET Framework 2.0 have shipped. The code will be on MSDN later today. This is Microsoft's first real SOA-oriented development platform. I wonder what people will build with it...

Hosted Microsoft

InformationWeek's article "Coming From Microsoft: 'Hosted Everything'" doesn't come as a surprise after this year's PDC, and after a conversation Mary and I had with Orlando Ayala (MS VP Small and Medium Solutions and Partner Group) a couple of weeks ago... They're seriously looking at how they offer services to the SME marketplace. While tools like the Centro version of Windows Server (think of it as Small Business Server for medium-sized businesses) and the Dynamics approach to business applications will help, there's a lot to be said for exposing platform components as hosted services - especially when you take into account the role of the Windows Workflow Foundation and Indigo. After all, most SMEs don't have full-time IT staff, so how can you hope to have them use Axapta or even BizTalk? SOA for medium businesses is going to require hosted components - but components that can be remixed. Microsoft's history of working with the channel and ISVs makes me think that it will provide tools that can be adapted to work the way your business works, rather than the other way around...

Well, it is "meta" after all...

An interesting blog entry and associated Infoworld article from Jon Udell on The many meanings of metadata.
The solution is a complex recipe, but we can find many of the ingredients at work in the emerging discipline of SOA (service-oriented architecture). We use metadata to describe the interfaces to services and to define the policies that govern them. The messages exchanged among services carry metadata that interacts with those policies to enable dynamic behavior and that defines the contexts in which business transactions occur. The documents that are contained in those messages and that represent those transactions will themselves also be described by metadata.
It's a concept that's closely related to what I think of as one of the key tenets of the SOA philosophy: Interface-first Design. You can't have loosely-coupled applications working together if you don't know how you're going to wire them together. You need a defined interface that can be used first as part of a test harness while you build and QA your service, and then as a contractual relationship between two businesses. The interface is the description of a service's capabilities. It really doesn't matter what sits behind the interface, as long as the interface is stable.
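Interface-first design is easy to show in miniature. In this sketch (all names invented) the contract is written before any implementation exists, a stub implements it for the test harness, and consumers never see what sits behind it:

```python
from abc import ABC, abstractmethod

# The contract comes first: an abstract interface agreed between parties
# before any implementation is written.
class QuoteService(ABC):
    """The agreed interface - the 'contract' between two businesses."""
    @abstractmethod
    def price(self, symbol: str) -> float: ...

class StubQuoteService(QuoteService):
    """Test-harness stand-in, usable before the real service exists."""
    def price(self, symbol):
        return {"ACME": 10.0}.get(symbol, 0.0)

def portfolio_value(service: QuoteService, holdings: dict) -> float:
    # Consumers code against the interface, never the implementation,
    # so the stub can later be swapped for the live service unchanged.
    return sum(service.price(sym) * qty for sym, qty in holdings.items())
```

Swap `StubQuoteService` for a live implementation and nothing downstream changes - which is the whole point of a stable interface.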

Friday, October 21, 2005


Flock, a Firefox variant for working with blogs and social bookmarking services, is now available as a developer preview. We heard a lot of buzz about it when we met up with the folks from various blog search companies in September. Now to see if it meets the hype. I've configured it to work with my two main blogs on two different services. It's relatively easy - as long as your blog host supports ATOM, and you know the URI of the ATOM API for your blog. You can download a build here if you want to give it a spin.
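For the curious, here's roughly what the document a client like Flock sends to a blog's ATOM API looks like - a minimal sketch using Python's standard XML tools and the Atom 1.0 namespace. The endpoint URI and authentication are host-specific and omitted:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "http://www.w3.org/2005/Atom"

# Build a minimal Atom entry document of the kind a blog client POSTs
# to an Atom posting endpoint. Transport and auth are not shown.
def make_entry(title: str, content: str) -> bytes:
    ET.register_namespace("", ATOM_NS)
    entry = ET.Element(f"{{{ATOM_NS}}}entry")
    ET.SubElement(entry, f"{{{ATOM_NS}}}title").text = title
    body = ET.SubElement(entry, f"{{{ATOM_NS}}}content", type="html")
    body.text = content
    return ET.tostring(entry, encoding="utf-8")
```

Once you can build and POST that document, any blog host that speaks ATOM is reachable from the same client code.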

Wednesday, October 19, 2005

Play on (safely)

Want to send a demo of your latest SOA stack to your users but don't want them to install a whole server just to look at what you're doing? You could give them a nice virtual machine image to try out, all loaded up with your code and a set of demo scripts. But do you know if they even have VMWare or Virtual PC installed? There's a quick answer in the shape of VMware Player:
VMware Player is free software that enables PC users to easily run any virtual machine on a Windows or Linux PC. VMware Player runs virtual machines created by VMware Workstation, GSX Server or ESX Server and also supports Microsoft virtual machines and Symantec LiveState Recovery disk formats.
It's a free download, so should make it easier to distribute demos as virtual machine images. So now all we need is a library of base images. VMware has thought of this too - so pop along to VMware's Virtual Machine Center to find ready-to-run stacks from Novell, Red Hat, BEA, IBM and Oracle - as well as VMware's own Browser Appliance secure browsing tool. So, now for the big question: will Microsoft bite the bullet and do the same with Virtual PC? It looks like it has to.

More virtualisation companies come out of the woodwork

Today's discovery is Parallels, Inc., who contacted us after our Guardian piece on Intel's processor roadmap. Looking at their web site, it appears that they're going straight for the hypervisor market:
Parallels Enterprise Server, expected in mid-2006, will be a pure-hardware server virtualization and management solution that enables IT professionals to create several isolated independent virtual servers running Windows, Linux, OS/2, or FreeBSD on a single host physical server. Parallels Enterprise Server’s pure hardware implementation pools hardware resources and then dynamically allocates them to virtual servers as necessary, ensuring that each physical server is used to its maximum potential, and that each virtual server always has the resources it needs to operate efficiently.
It sounds like Parallels' Enterprise Server will work directly with Intel's VT and/or AMD's Pacifica, removing the need for a host OS - and giving a fairly hefty saving in system resources. So who will have the first hypervisor on the market? Parallels? Xen? VMware? Microsoft? It's an interesting race that's lining up now - with plenty of competition and scope for differentiation and innovation.

Sunday, October 09, 2005

It's the stack that matters...

I've been thinking about what will make a successful company in the Computing 5.0 world. There's a lot of hype about platforms, but I think that things go a lot deeper. To be successful in tomorrow's IT world, companies must be able to point to a stack of components that includes their software and hardware - alongside open standards that help them work in a heterogeneous environment. If you think you will control everything, then you're unlikely to get anywhere. Very few companies seem to understand the stack, and how it fits together.

EMC seems to have been one of the first to seize on the stack model, understand where it could add value, and make the appropriate acquisitions and partnerships to move forward. By understanding that intelligent storage and virtualisation were key components of any future service architecture, EMC chose its stack components carefully. Leaving open interfaces at all levels, it's in the process of integrating Documentum and Legato into its storage hardware - making rules-based content and object management part of your infrastructure, not your applications. It doesn't matter what else you add to the stack - EMC has made sure that it's in an excellent position for the next decade.

BEA is another company that understands the stack. Focusing on middleware as the new infrastructure, and providing the tools to integrate with a range of different middleware technologies as well as orchestrating processes, BEA is facing a challenging couple of years. However, if it sticks to its guns and keeps its focus, I suspect that it will become a big winner. Its acquisition of Plumtree makes a lot of sense here, as BEA seems to have realised that its strengths lie in building the next generation of network infrastructure. Other players that seem to be making a bid for Computing 5.0 include Google, SAS, Microsoft and Yahoo!.

There are also companies that have failed to understand how the stack will fit together. Oracle has made the mistake of buying its customer base twice over with recent acquisitions. While its Fusion model appears to offer a "plug and play" middleware approach, there appears to be too much of a SAP-style reliance on fixed business processes and specific ways of working. IBM is missing the synergy between its WebSphere applications and its hardware, as well as Lotus' knowledge management tools. The cosy relationship between its platform business and Global Services is a danger, as it could lead to complacency.

Others are focussing on niches that are too small. Apple may well end up dominating the living room, but you can only do so much with iTunes. Adobe may make some headway with Acrobat LiveCycle, but it needs to work more closely with companies like EMC. Otherwise it'll become purely a developer of UI design tools.

Of course there are plenty of small companies out there who will bring their expertise to the table. The open source world is rapidly becoming stack-based, and companies like MySQL and JBoss seem ready to work with stack support vendors to produce integrated platforms. The blended model that BEA is attempting with Eclipse and Apache looks to be an interesting alternative.

Pedal to the Metal

Andrew Ducker made some interesting comments about my last entry, where I suggested that new silicon technologies would significantly change the role of the operating system. One thing I failed to mention was that I was positing some future version of the .Net CLR that would focus purely on computation and on network connectivity. This would significantly simplify the task of writing a "raw metal" CLR. Yes, Microsoft would have the problem of maintaining two different CLRs - a server/middleware version and a client version - however, the architecture that Microsoft is driving towards through the various Windows Foundations wouldn't preclude such a move. In fact it would make it easier - leaving Microsoft with a componentised CLR that could be tailored for specific tasks. After all, not everything needs a UI, and the .Net CLR is part of a mature application server platform. A componentised CLR would also allow Microsoft to synchronise releases of the Compact Framework with the full .Net system. The Windows hypervisor would be able to manage multiple "raw" CLRs effectively, allowing system managers to get much more bang for the buck without the OS overhead. And perhaps then Microsoft back ends could take advantage of intriguing new technologies like Azul Systems' network-attached processing.

Friday, October 07, 2005

No more operating systems

Intel's VT and AMD's Pacifica are probably the most revolutionary technologies around. Until the arrival of chips with these technologies, Virtual Machine Monitor products like VMware, Xen and Virtual Server needed to be complex tools that managed and intercepted low-level commands from the client operating systems, and marshalled them through the host OS for execution. It's a process that can be slow (and often is). By providing a silicon basis for virtualisation, the CPU companies have changed the role of the VMM from a complex piece of software that needs to marshal OS functionality at an application level to that of a partition loading, marshalling and management tool: a hypervisor. Hypervisor-managed client operating systems will have access to all the resources of their memory partition. In fact, with a well-written hypervisor, do we need an OS at all? There's work going on to deliver OS-less operations. We've already seen Intel demonstrate task-specific partitions with thin operating layers. But what if the partition was running a version of an existing virtual machine, like Java or .Net? BEA announced at its recent Santa Clara BEA World that it was working on a version of its JRockit JVM that wouldn't need an operating system. It would be controlled by a hypervisor (possibly Xen) and run in its own partition. This is an important move for the industry - it completely changes the dynamics of the relationship between operating systems vendors and everyone else. If your J2EE containers can run in their own partitions, using network storage, then there's really very little need for today's memory-hungry operating systems on the servers that deliver services - just load up a JVM with your container and your service application. Using a hypervisor it'll be easy to add processing resource as required - and to move the partition from compute resource to compute resource.
It's easy to envision a world where the OS is layered and partitioned across a number of virtual machine spaces. In some there'll be hypervisor-managed JVMs, in some security monitors, in some there'll be task-specific OSes (perhaps a web server, perhaps a file store manager, perhaps a desktop OS), all communicating through shared memory using TCP/IP and XML. I suspect that we'll see Microsoft delivering a hypervisor-controlled version of the .NET CLR in a similar time frame to their post Vista OS.

Tuesday, October 04, 2005

SOA Governance

Managing the development of service oriented architectures will be very different from managing single application developments. For one thing, architects will need to coordinate the development of services across the business, while juggling the alignment of their IT strategy with the overall business strategy. It's important to think about this issue - and there's an interesting paper in Microsoft's Architecture Journal from Richard Veryard and Philip Boxer on "Metropolis and SOA Governance".
Summary: In the service economy, we expect service-oriented systems to emerge that are increasingly large and complex, but that are also capable of behaviors that are increasingly differentiated. As we shall see, this is one of the key challenges of Service Oriented Architecture (SOA), and is discussed in this article.
It fits quite nicely with a piece I wrote for the Guardian back in March: The SimCity Way.
Managing software development in a large organisation can be tricky. IT directors need to juggle scarce resources while delivering applications and services that respond to business needs. It is a complex task, and one that often looks more like managing a portfolio of investments - or playing a particularly complicated game of SimCity.

Ning: a web-based social software UI development tool

So Marc Andreessen's 24 Hour Laundry has left stealth mode and launched the first web-based development tool for social applications: Ning. It's worth looking at the Ning Pivot to see just what people are building. Ning's definitely a Computing 5.0 application - using a web-based social application framework as a front end to a wide range of web services. Its Content Store is an interesting tool - an object database with strongly typed data (yes, folks, it's the Newton's Soup for the web!). An XML programming language or a custom version of PHP helps build apps that can use SOAP to connect to remote services, as well as to Ning-hosted services. Interestingly, all application source code is visible to all developers, so you can build your app on top of someone else's code - code reuse the old fashioned way. Layout guidelines make sure that all Ning applications look similar, with a standard structure for each page. There's also a set of AJAX tools to make it easier to design complex user interfaces - and instructions for linking to Zend and Dreamweaver as developer tools. This is going to be interesting to play with. I've signed up for the Developer program, so will report back on how things look from the code side of the fence.
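The Content Store idea - an object store where every object is checked against a declared type - is easy to sketch. This is a toy with invented names to show the concept, not Ning's actual API:

```python
# A toy sketch of a "strongly typed object database": each stored object
# declares a type, and its fields are checked against that type's schema
# on the way in. All names here are invented for illustration.
class ContentStore:
    def __init__(self):
        self._schemas = {}   # type name -> {field: expected Python type}
        self._objects = []

    def define(self, type_name, **fields):
        """Declare a content type and its field types."""
        self._schemas[type_name] = fields

    def add(self, type_name, **values):
        """Store an object, rejecting anything that violates the schema."""
        schema = self._schemas[type_name]
        for field, expected in schema.items():
            if not isinstance(values.get(field), expected):
                raise TypeError(f"{field} must be {expected.__name__}")
        obj = {"_type": type_name, **values}
        self._objects.append(obj)
        return obj

    def query(self, type_name):
        """Fetch all objects of a given content type."""
        return [o for o in self._objects if o["_type"] == type_name]
```

The strong typing is what makes the shared store safe to build on: an app reusing someone else's "Bookmark" objects knows exactly what shape they'll have.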

Monday, October 03, 2005

Introducing Computing 5.0

Seeing as I've introduced a new term, I really should define it. Computing 5.0 is the convergence of several trends that are set to cause massive changes to the way we do IT - for enterprises and for consumers. Computing 5.0 is tomorrow's world of a virtualised infrastructure overlaid with a dynamic flexible process-oriented service computing framework. Computing 5.0 is what I have described as a "phase change" in IT. It's the point where we abstract a large set of technologies (and their associated problems) and move from the current application-centric paradigm to one where we think in terms of workflow and process first, and component implementations second. Key elements of Computing 5.0 are:
  • Virtualised infrastructure
  • Loosely-coupled service architectures
  • Open standards
  • Process-driven middleware
  • Context-sensitive user interface
  • Strong identity management
  • Intelligent network storage
  • Workflow languages
There are associated concepts:
  • Interface-first design
  • Ethnography as a development tool
  • Web services
  • Open file formats
  • Rich Network Applications
  • "Long" transactions
  • Multi-modal user interface
  • Federated operations
  • Strategic architectures
What interests me is the speed at which these concepts and technologies are evolving. In future entries I'm going to go into these concepts in more detail, and also look at the development of Computing 5.0 companies.
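One of the associated concepts above, the "long" transaction, is worth a sketch: rather than holding locks for hours or days, each completed step registers a compensating action that unwinds it if a later step fails. This is saga-style semantics in generic Python, with invented step names - not any specific workflow engine:

```python
# A sketch of a "long" transaction: instead of locking resources for the
# duration, each completed step registers a compensating action, and the
# compensations run in reverse order if a later step fails.
def run_long_transaction(steps):
    """steps: list of (action, compensation) callables.
    Returns True on success, False after unwinding a failure."""
    done = []
    for action, compensation in steps:
        try:
            action()
            done.append(compensation)
        except Exception:
            for undo in reversed(done):   # unwind completed work
                undo()
            return False
    return True
```

This is exactly the shape of cross-business processes - book a flight, book a hotel, take payment - where each party commits locally and the workflow, not a database lock, keeps the whole consistent.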

There's no such thing as Web 2.0

Tim O'Reilly's forthcoming Web 2.0 conference, and his recent "What is Web 2.0?" essay, have sparked a lot of debate about the future of web applications. I've also noticed some blog comment about a recent piece on Rich Internet Applications I wrote for the Guardian, which wondered why I didn't refer to Web 2.0 in the piece at all. Here's what I think. There's no such thing as Web 2.0. What people are calling Web 2.0 is actually the user interface layer for what I'm writing about in this blog. Let's call it Computing 5.0. It's not a bad definition, especially if we look at the various phases of IT evolution:
  • 1.0 was the mainframe
  • 2.0 was the personal computer
  • 3.0 was client-server
  • 4.0 was the web and n-tier architectures
  • 5.0 is tomorrow's world of virtualised infrastructure and loosely connected service architectures
Tim O'Reilly describes Web 2.0 as:
Web 2.0 is the network as platform, spanning all connected devices; Web 2.0 applications are those that make the most of the intrinsic advantages of that platform: delivering software as a continually-updated service that gets better the more people use it, consuming and remixing data from multiple sources, including individual users, while providing their own data and services in a form that allows remixing by others, creating network effects through an "architecture of participation," and going beyond the page metaphor of Web 1.0 to deliver rich user experiences.
That's exactly what we need to provide the context-sensitive, process-driven UIs this next generation of applications is going to require. We're going to need tools to help us mix services into UIs that will add value to our lives and businesses. We need tools that can work when we're connected, and when we're disconnected from the network. We need AJAX, we need Flash, we need XAML and WinFX, we need Java Server Faces and Spring and Ruby On Rails, we need RSS and ATOM, we need HTTP/POX and we need REST. That's what the Web 2.0 folk are building. That's what they're giving us. They're building the MVC pattern for Computing 5.0. Now let's get that back end up and running.

Sunday, October 02, 2005

Project 3: Managing a service platform

Fourth (and final) of a series of posts from an unpublished book chapter written in 2002

Project 3: Managing a service platform

The web service technologies used by the MAP can also be used to offer operators a distributed service management model. By defining a standard set of services for management, applications can offer both central and local management tools appropriate information. This approach can also be used by management packages, so that information is delivered in an appropriate fashion, ready for processing and delivering to end users. Different sets of services will be used by different roles, so that high-level management may be delivered only usage and financial information, while local technical staff will be delivered detailed operational statistics and data. Where web technologies are used, information can be gathered into digital dashboards, applications that collate and display a user's key information streams. These can be delivered to desktop browsers, or embedded in email tools like Microsoft's Outlook. Using MAP techniques, alerts and other event- and workflow-oriented messages can be delivered to mobile devices, with drill-down screens available on wireless PDAs. One advantage of this approach is that a local service platform at an operator's headquarters can be used to aggregate information from several, globally distributed, service platforms. This is important if an MVNO (Mobile Virtual Network Operator) is offering a partner-hosted service platform that may or may not contain all the components offered in the operator's main market. A dashboard solution can monitor both local partner-provided services and services built on top of the operator's own common component architecture.

Next steps: MAP as the universal aggregator

A new role for the mobile operator

The role of the mobile network operator is one that is likely to change dramatically with the shift from voice to data services and the move to an experience-based customer relationship.
While data services are often seen as business-to-business solutions, operators will have to offer them while maintaining a business-to-consumer focus. Currently digital consumers spend most of their online time using a small number of online brands and portals. What is surprising is that this pattern has persisted despite the open nature of the web browser, and the attempts of ISPs like Freeserve to capitalise on their millions of users with their own portals. Data-based mobile services and new devices will cause the current access model to change dramatically as new users come online; users who do not have the deep interaction-based relationship with a PC and web browser, instead using interactive TV and mobile devices. It is these new devices that will be the target consumers of web services, as web services provide operators with an application-to-application relationship. With application-to-application web services, there will be a need for a new form of intermediary: an organisation that can provide the interface between a consumer and the web services they want to use. Its users will need to access services provided by a wide range of service providers, in the shape of both personal web services and corporate services. Any new aggregation service will need to provide a consistent user experience across a wide range of services. User experience will be a critical feature of this new intermediary, as many different web service aggregators will be vying for the consumer's online custom. A Multi-Access Platform based on the Multi-Access Portal approach is ideally suited for acting as an aggregation hub for a wide variety of web services, whether they are based on Microsoft's .NET MyServices or Sun's ONE, or even if they are existing HTML web applications.
The XML integration approach used in the MAP is based around using the same standards as used by third party web services, allowing operators to use the MAP to deliver applications built around collections of web services. Sitting between the digital consumer and the web service providers, the MAP can aggregate services, providing users with one user experience and, where possible, one billing relationship. This is an opportunity for mobile operators to become the trusted intermediary for web service applications for a new generation of online consumers.

Saturday, October 01, 2005

Project 2 Output: The Multi-Access Portal

Third of a series of posts from an unpublished book chapter written in 2002

An architecture for user-centric web services across multiple devices

The Innovation Platform concept was an idea ahead of its time. The web services model was in its early days, and specifications still needed to be finalised. Despite that, just six months later, it was looking as though it was now possible to implement an Innovation Platform of some form, as the World Wide Web Consortium and other bodies had forged ahead on the development of XML technologies. SOAP, the Simple Object Access Protocol, and its companion WSDL (the Web Services Description Language) were now public standards. The client's requirements and current business model were analysed, and a proposal for work produced. Initial discussions with the client resulted in the decision to begin work on designing a Multi-Access Portal. Using learning from the continuing work on the Innovation Platform, the Multi-Access Portal was intended to offer a means of linking services to consumers across multiple delivery channels, while providing the ability to develop a revenue stream in conjunction with third party service providers. The project took the Multi-Access Portal from an initial sketch to a set of working prototypes that delivered content to a range of devices, including wireless connected PDAs and WAP phones.


In any user-centric design process, it is critical to understand the end user. In a project in another country, with a different language and different cultural norms, it's even more important to gain a deep understanding – if only to avoid falling into cultural traps. A key task for a user research team is an ethnographic study. While market data was widely available, this needed to be translated into real-world activities. The team had to go out into the field and watch consumers. How did they use their mobile devices? What did they think of brands? Where did they spend their time? How did they bank? Observational data also needed to be supplemented with interviews and focus groups. Meanwhile, market studies can be used to develop a picture of the target market segments. Spending patterns can be used to flesh out these segments, and to gain an understanding of possible revenue streams and prospective services. When user research is combined with market studies, an accurate picture of end user groups can be drawn. This allows key user needs to be captured, and selected for initial application scoping. Needs can be as simple as wanting to keep in touch with friends, and as complex as wanting to understand and manage a specific financial transaction.

Scenario development

With the target market segments and their key needs documented and agreed, the next step was to develop a set of scenarios. User profiles would be drawn up, and fictional characters created to fit these profiles. Each character would then be faced with a need defined during the user research, and a scenario drawn up showing how they would interact with a MAP while fulfilling that need. These could then be refined in the light of the available technologies. As 3G device capabilities remained unknown, some of the work had to remain speculative. Some scenarios were developed in the form of animations, in order to present a vision of the proposed service. These could sketch out user interface designs, while allowing interactions to be demonstrated. One of these scenarios showed a user researching and booking a holiday. He initially made a query on a desktop PC logged into the MAP, and results and alerts were delivered throughout the day to his mobile device. More detailed information could be explored through a wireless PDA, and an office PC used to put together a package that could then be shown to the whole family on an interactive TV system. Finally, roaming mobile devices could be used to access location-based information at the destination. Another scenario involved a couple making a decision about a second mortgage. They were able to use a range of devices to communicate with their bank and financial advisers. A financial transaction could be started on one device and confirmed on another, and information passed from one personal account to another. These scenarios showed several features of the proposed MAP, including the use of third party services, workflow-based long transactions, and multiple device outputs. They also introduced one of the key problems for mobile application development: context. Context is a critical issue for application designers and for end users, and one often ignored.
There is little or no point in duplicating web applications for mobile users, a lesson that can be learnt from the failure of WAP portals and sites. By understanding the interaction context – device, time, and location – an application can determine the appropriate user interface or content to deliver to an end user. While it remains extremely difficult to determine the exact interaction context remotely, existing personalisation and customisation techniques and technologies can be used to offer something that approximates to a context sensitive service.
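That context-driven selection can be sketched very simply: map (device, setting) pairs to presentation templates, with sensible fallbacks where the exact context is unknown. Device classes and template names here are invented for illustration:

```python
# A sketch of context-sensitive delivery: choose a presentation for the
# interaction context (device plus setting), falling back first to any
# template for the device, then to a default. All names are invented.
TEMPLATES = {
    ("pda", "commute"): "headlines-compact",
    ("pda", "office"): "summary-compact",
    ("desktop", "office"): "full-page",
}

def choose_template(device, setting, default="full-page"):
    # Exact context match first, then any match on device alone.
    if (device, setting) in TEMPLATES:
        return TEMPLATES[(device, setting)]
    for (d, _s), template in TEMPLATES.items():
        if d == device:
            return template
    return default
```

Real personalisation engines are richer, but the shape is the same: approximate the context, then pick the closest presentation rather than duplicating the whole application per device.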

Candidate architectures

Once a set of usage scenarios was complete, the technology team was able to start work on developing a technology strategy for the client. Instead of jumping into high level design, the approach chosen was to develop a set of candidate architectures that could be used as “straw men”, and tested against the requirements of the scenarios. This led to the development of two candidate architectures, one targeted at a content-based service, the other at an application-based service.

CMS and Application server architecture

One of the two candidate approaches to a MAP architecture treats a MAP primarily as a content-based service. This entails using a content management system as the core of the service, handling multi-channel and multi-device output through an application server. In this architectural approach, a content management system is used to manage content assets, with an application server taking those assets and formatting them appropriately, delivering device-targeted content. The application server also allows the MAP to include business logic in any service delivery components, integrating content and applications. The CMS can be used to manage application components as well as content. UI for applications will be handled using familiar techniques, based around templates that tailor content for selected target devices. It’s important for operators using this architecture to regularly monitor server logs for requests from unknown devices, and then to develop templates and application components that tailor content for these devices.

The use of commercial application servers and content management tools will allow rapid roll out, with minimal systems integration. Applications will need to be developed as required, and scalability can be provided using well known procedures. By using a CMS solution, content can be generated by the MAP operator and its partners, and delivered into the CMS workflow process before being delivered to end users. One issue with a content-driven approach is that there is reduced scope for user interactivity with MAP services, with its users acting as information consumers. Applications delivered over this service are likely to be limited to request and response services, with some alert functionality.
A CMS solution does reduce development risk, as it is possible to implement it by customising off-the-shelf products, reducing initial time to market at the expense of increased development time for future applications and services. It should be noted that multi-channel delivery tools are being built into the latest generation of content management products.
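The template targeting described above depends on identifying devices from their browser signatures. A minimal sketch, assuming an invented signature database and template family names, with unknown agents logged for later template development:

```python
# Hypothetical sketch of device identification by User-Agent signature.
# Signature strings and template family names are illustrative only.

SIGNATURES = [
    ("Nokia", "wap-phone"),
    ("Windows CE", "pda"),   # checked before the generic "Mozilla" match
    ("Mozilla", "desktop"),
]

unknown_agents = []

def identify(user_agent):
    for needle, template_family in SIGNATURES:
        if needle in user_agent:
            return template_family
    unknown_agents.append(user_agent)  # record for later template design
    return "default"

print(identify("Nokia7110/1.0"))       # -> wap-phone
print(identify("SomeNewBrowser/0.1"))  # -> default
print(unknown_agents)                  # ['SomeNewBrowser/0.1']
```

Match order matters, since many handheld browsers embed "Mozilla" in their signatures; the more specific entries must come first.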


Advantages:
  • Traditional n-tier web architecture.
  • Can be put together using existing products.
  • Reduced time to market for initial launch.
  • Content can be delivered using existing channels and procedures.
  • Content assets and code are kept separate.
  • Use of separate application and web servers adds to scalability and reliability.


Disadvantages:
  • Users become information consumers, not active application users.
  • Applications will need to be one-off developments.
  • Little prospect of code reuse.
  • Increased time to market for new application development.
  • Not all content management architectures support multi-channel delivery.
  • Architecture does not support context-based applications – separate applications will be required for each possible delivery channel.

Web services architecture

The second architectural approach was to consider a MAP as a host platform for a collection of interactive applications, hosted in an application management framework. Open standards are a key technology driver, and so application components should be implemented as web services. In this architecture the MAP is used to provide a framework of tools and services that can be used to deliver applications to target devices. As well as offering core services, it will host applications that can be developed internally, or by third parties and partners. By using a web services approach for application design and development, these applications will be composed of software components that can take advantage of other components in the framework.

The core toolset is likely to contain some content management features, in order to manage and deliver content to client devices. However, application workflow and delivery templates can be separated from the web services components, easing application development and reducing time to market for the introduction of new services and applications. By using this application management approach, interfaces and functionality can be tailored to the user’s context (e.g. working at a PC, using a mobile device, watching interactive TV) without requiring separate applications for each context. This will give network and portal operators the opportunity to deliver appropriate services to specific devices – and with appropriate templates even within a specific device family, so that different variants of device browsers can be supported.

The template approach used will allow the user experience to be managed separately from application components. An application management platform will allow design and UI standards for multiple devices to be enforced by the application framework, separating UI from both application components and workflow. Service components can be updated or changed without affecting design, and vice versa.
The application framework used by a MAP can be delivered through a collection of web service APIs, which allow services to communicate through a loosely coupled messaging architecture. This will allow a MAP to be deployed across a number of server platforms, and to take advantage of developing XML-based enterprise application integration approaches to increase integration with partner services and applications. By using open standards such as WSDL (Web Service Description Language) and UDDI (Universal Description, Discovery and Integration) to publicly advertise APIs and UI standards, an application MAP can attract third party developers, and allow rapid development and roll out of new services in response to user demand. This approach will require logging and monitoring of service usage in order to examine user habits and actions. While offering an environment that meets the needs of a mobile service operator, the application management approach to a MAP will require significant development effort. There is a possibility that web service frameworks from Microsoft, IBM and the like will reduce this risk, but it must be kept in mind when planning implementation.
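The loosely coupled messaging described above can be sketched in miniature: services register handlers for message types and never call each other directly. All names here are illustrative, and a real MAP would route serialised XML messages between servers rather than Python dictionaries in one process:

```python
# Hypothetical sketch of a loosely coupled message bus. Services subscribe
# to message types; publishers never hold references to subscribers.

class MessageBus:
    def __init__(self):
        self.handlers = {}

    def register(self, msg_type, handler):
        self.handlers.setdefault(msg_type, []).append(handler)

    def publish(self, msg_type, payload):
        # Deliver to every registered handler, in registration order.
        # Senders and receivers only share the message format.
        return [h(payload) for h in self.handlers.get(msg_type, [])]

bus = MessageBus()
bus.register("user.registered", lambda p: f"welcome {p['name']}")
bus.register("user.registered", lambda p: f"provision account for {p['name']}")
print(bus.publish("user.registered", {"name": "alice"}))
# -> ['welcome alice', 'provision account for alice']
```

Because new handlers can be registered without touching existing ones, services can be added, swapped or removed independently, which is the property the architecture relies on.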


Advantages:
  • EAI messaging architecture.
  • Can be put together using a mix of existing products and bespoke development.
  • Applications and services can be developed in-house and with third parties.
  • Application workflow, display assets and service components are kept separate.
  • Use of distributed application architecture increases reliability and scalability.
  • Component-based services model increases code reusability.


Disadvantages:
  • An application management system will be a significant investment - both in terms of time and funds.
  • Loosely coupled messaging architectures are slower than tightly coupled n-tier systems.
  • Requires significant new build of integration components and systems.
  • Separation of workflow and application components can lead to inconsistencies if interfaces and APIs are not well defined.

The MAP as a web services platform for mobile users

By choosing an XML-based web services approach to the development of a MAP, an operator can approach service delivery in a very different manner from traditional portals. Instead of focusing purely on acting as an information provider, an operator can base their delivery model on applications, and their business model on the additional revenue streams that can be realised from these applications. Revenue can come from various routes. Some core services can be sold to third-party developers, an operator can take a percentage of every transaction carried out over their system, and users can subscribe to premium services. In practice, a MAP is likely to be used by a network operator to increase ARPU, and services will be targeted to either deliver transactional revenue, or to drive additional voice traffic. Developing a web services platform also means that an operator isn’t reliant on its own development staff. Instead, a Darwinian approach will allow third-parties to develop new services that may or may not survive in the market. Usage monitoring will allow a platform operator to determine the successful services, and to promote services that it sees as important. If a service isn’t successful, the architecture will allow it to be quickly removed and replaced with either a new version, or a completely different service.

Application design issues

The development of a web services based MAP raises several new issues for developers. While the web services model itself is now well understood (in the shape of Microsoft’s .NET platform, and the work done by IBM and others), some of the issues with integrating a distributed component architecture still need some thought. These include the question of how to develop applications that separate display from workflow from business logic, and that need to deal with both user interaction context and long “transactions”. An early requirement will be the development of an XML-based workflow grammar. This will allow developers to describe the application workflow, and to define how component inputs and outputs will be managed. A key design decision will be the choice of core web services. These will need to be provided by the service platform, and run by the operator. While some will be stand-alone services that will be used to bootstrap third-party developments, others will need to expose elements of the operator’s network infrastructure. These elements could include billing and location services. A possible list of core services is shown below:
Registration: A Registration service will be required to take a user’s personal details and store them for future reference; it will also need to record users’ usernames and passwords, and their service preferences. The Registration service should be available any time a user accesses the portal or any of its component services.
User database: This service will offer access to a relational store that will be used to store user information and to manage basic account information. This service will only handle user details and log-on information – e.g. address, billing address, account name, user screen names, and service passwords.
Login and single sign-on: The sign-on service will define tools that can be used to handle user authentication, as well as offering authorisation services that can be used by external service components. This service will offer a well documented API for use by third party applications, in order to give portal users a seamless online experience. In practice it may be possible to use a third party service like Microsoft’s Passport or the rival Liberty Alliance solution.
User preference storage: This service will be used to store and handle user preferences. By offering a central service it will make it easier to manage changes, and to distribute them to all relevant systems. While third-party services will be able to hold their own user preferences, this service will act as the master copy of the user data.
User customisation details storage: In order to deliver an appropriate customer experience, there will be a need to allow service customisation. This will allow users to pick and choose which service elements are displayed. Using this service, applications will be able to give users the tools to fine-tune their online experiences – reducing the need for complex personalisation systems.
Personalisation: For more complex applications, there will need to be some form of personalisation. This service is intended to monitor the habits and behaviours of users in order to dynamically control the content delivered by the service. Using the personalisation service, application developers will be able to track user actions, and to tailor future responses appropriately. This will require the use of a storage service.
As any mobile service will be device orientated, there will need to be a means of indicating that a guest is using the service in the guise of a registered user, or that the user is operating the service for a friend. This can then be used to avoid creating false positives in the tracking and personalisation system.
Legacy system API: Mobile operators will have existing service components they will want to offer as services to their users and to application developers. This will require the development of web service APIs to wrap these existing applications and components.
These legacy components are likely to be in the form of applications or data storage, such as geographical information systems. Existing enterprise application integration techniques can be used to ensure continuity of service in the event of changes to the legacy architecture and applications.
Multi-channel UI formatting: As the role of a MAP is to handle UI for an aggregation of web services, it will need to contain private components that will deliver information directly to devices. The services platform will contain tools that will handle multiple templates for multi-channel delivery.
A key feature will be a central identification service that will be used to identify devices based on the browser signature. This will require a regularly updated database of browser signatures for all supported devices, along with a mechanism for recording the signatures of un-identified browsers, in order to allow design and development of appropriate templates.
Information storage: Like any online service, a MAP will need to store information, in order to manage persistence information for applications, or to store content that will be delivered to its users. A well-designed storage architecture will be a critical component of the service, and will need to be designed to use a common storage model for all applications. One option is to make sure that all information will be tagged with XML descriptors to provide effective metadata, and also to ensure that XML Schemas are in place that define all XML documents used on the service. An appropriate web services interface should be designed to give applications managed access to this storage.
In practice it’s likely that multiple storage systems will be implemented, in order to provide the most appropriate storage for applications and service components. Any storage system used will need to comply with the service architectural principles and API standards.
Syndication API: There may be a requirement for a MAP to share content with partners and other portals on a revenue basis. This will require the development of a core service that will allow content to be shared with authenticated and authorised services. If the syndication system is to be billed on a per-bit basis, there will need to be tracking of all downloaded files, and possibly the implementation of a digital rights management solution where multimedia content is being used.
3rd party information delivery/editing: Third party content will need to be delivered to any content management system used to manage MAP content. In order to operate this function, a service will be needed that will give third parties access to the CMS and allow content partners to be included in the CMS workflow. This functionality will need to be considered in the choice of any CMS tool or solution.
An alternative option would be to expose CMS functionality through a partner extranet. Security will need to be a key consideration here, as any exposure of editing functionality to partners will increase the risk of intrusion or content subversion. Such an approach would also reduce the ability to automate publishing processes.
All content submitted to the service by partners and information providers will need to be in a specified format. This is likely to be in XML documents, defined by an XML Schema in such a manner as to allow automated content processing services to parse documents and store them appropriately.
Transactional payment system: A MAP service will require a complex payment system capable of dealing with multiple payment methods: among them direct phone billing, service account billing, and credit card payments. The payment engine will need to link to user profile information in a secure fashion, in order to determine payment preferences and offer appropriate choices depending on the application context. An appropriate web service will expose this functionality to applications.
Before developing a transactional payment system, decisions on the accepted payment methods and associated APIs will need to be taken.
Location service API: As MAP applications may not necessarily be offered by a network provider, location information for any location-dependent service will need to be provided to the application by a location service. By working with information stored in user profiles, a location API will allow applications to take information from handset GPS systems, network location systems, or user input. This standard interface should be made available to all applications, and included in template definitions for location-based applications and functions.
Advertising engine: A possible revenue generating function of a MAP will be to deliver targeted and appropriate advertising to its users. There will be a requirement for any advertising system to use personalisation information and user preferences to determine appropriate advertising content. Location services may also offer a method of targeting adverts for mobile users. If relationships with other network operators are developed, this engine may be required to deliver content to network subscribers who are not service subscribers.

Billing system API: As a MAP is intended to be a transactional system, with the ability to charge users for content, services and goods, there will be a need for some form of billing system for account customers. This type of functionality could also form the basis of any micro-payment solutions.
An API or a series of APIs will be required, in order to allow applications access to the service billing solution, as well as from the service to partner billing solutions – especially if services are being offered to users of external 3G networks on a negotiated basis with the network operators.
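The XML-based workflow grammar suggested earlier can be sketched with a minimal interpreter. The element names, attributes and services below are invented for illustration; a real grammar would also have to cover error handling and long-running transactions:

```python
# Hypothetical sketch: a workflow document names a sequence of services and
# wires one step's output to the next step's input via named slots.
import xml.etree.ElementTree as ET

WORKFLOW = """
<workflow name="book-holiday">
  <step service="search" output="results"/>
  <step service="select" input="results" output="booking"/>
  <step service="pay" input="booking"/>
</workflow>
"""

def run(workflow_xml, services):
    root = ET.fromstring(workflow_xml)
    context = {}  # named values passed between components
    for step in root.findall("step"):
        arg = context.get(step.get("input"))
        result = services[step.get("service")](arg)
        if step.get("output"):
            context[step.get("output")] = result
    return context

# Stand-in service components; real ones would be remote web services.
services = {
    "search": lambda _: ["hotel-a", "hotel-b"],
    "select": lambda results: results[0],
    "pay": lambda booking: f"paid:{booking}",
}
print(run(WORKFLOW, services))
# -> {'results': ['hotel-a', 'hotel-b'], 'booking': 'hotel-a'}
```

Because the workflow lives in a document rather than in code, a new service can be composed from existing components without redeploying them.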

Prototyping to enhance user experiences

A key development approach for a MAP is the use of agile development methodologies (such as eXtreme Programming). These are suitable for component architectures, as they concentrate on testing and on regular delivery cycles. This approach can be linked with regular prototypes, in order to allow user testing for interfaces. One possible prototyping environment is built around wireless PDAs, using 802.11b connections. These can be throttled to give an appropriate connectivity experience, and can then be used to trial UI designs. An interesting early finding from this project was that users were far more inclined to use fingers on touch screens than styluses. Early UI designs with small stylus buttons were then reworked to be “finger-friendly”. One useful feature of the 802.11b prototype was its portability – a UI prototype could be served from a laptop running a local web server, with wireless connectivity to Compaq iPAQs. This meant that impromptu demonstrations could be given anywhere, without relying on network connectivity or software device simulators.
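A modern equivalent of that portable prototype rig can be sketched in a few lines: a laptop serving UI mock-ups over HTTP to wireless handhelds. The standard-library server here stands in for whatever web server was actually used, and the loopback address and ephemeral port are assumptions for the sketch:

```python
# Sketch of a self-contained prototype server: serve the current directory
# over HTTP, as a laptop might serve mock-ups to handhelds over 802.11b.
from http.server import HTTPServer, SimpleHTTPRequestHandler
import threading
import urllib.request

# Port 0 asks the OS for any free port, so the sketch runs anywhere.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:
    status = resp.status
print(status)  # 200: a client browser gets the served mock-up pages
server.shutdown()
```

On a real handheld the URL would use the laptop's wireless address instead of loopback; throttling the link to approximate GPRS or 3G speeds would be done at the network layer.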

Project 1 Output: The Innovation Platform

Second of a series of posts from an unpublished book chapter written in 2002.

Lifestyle management services
One of the key features of the client’s existing service offering was a suite of lifestyle management services. These were services that were intended to help individuals manage busy lives. The services used included shopping advice services, dating services, and a concierge service. A mix of web, SMS, IVRS and call centre staff were used to deliver these services. User research showed that this approach was popular with the target market segments. Further research gave pointers to a basket of lifestyle management services that could form the basis of a set of applications. These were prioritised, and then presented to focus groups to further refine the service profile. Financial and travel services were an important component of the proposed service package, along with location-based services and tools. There was also demand for communication based services, such as instant messaging, and location-based buddy lists which would alert users if they were in the same cell as friends.

Multi-platform delivery requirements
One of the main issues concerning the client was the explosion in device types. Where they had begun as a GSM operator, specialising in voice and IVRS services, they were now already offering SMS services (including location broadcast messaging), running a web portal, and starting to deal with WAP – and the incompatibilities between their providers’ microbrowsers. They were beginning to plan a GPRS network, and were also considering bidding for a 3G licence. It was clear to them, looking further east to Japan and iMode, that the next generation of phones would add many more user interfaces and many more form factors. Wireless connected PDAs would make things more complex still. Any solution they adopted would have to be adaptable enough to support these technologies and platforms, and the changes that would occur with shifts in fashion as the new season’s phones rolled out. The ability to cope with rapid change was a key driver for any platform development.

A conceptual architecture for rapid service delivery
A solution to these problems was put together, and quickly became known as the Innovation Platform. Designed to be a component-based application architecture, it could best be thought of as a bus structure that would allow new applications to be plugged into the service as required. With well defined APIs, new versions could be swapped in, and unsuccessful services removed. User interfaces and other core services are functions of the bus. While conceptually a messaging architecture, it could be developed using other component technologies, such as JavaBeans or COM – or frameworks like CORBA. All that would be required to link components would be a well defined protocol that could handle serialised data and manage events, as well as call methods and return results.

Fixed APIs
Fixed APIs are key to delivering an Innovation Platform, as they guarantee its “plug and play” operation. Instead of changing APIs from version to version, applications and components will need to be designed with fixed APIs that will not change. Any versioning will need to be side-by-side, so that services that use component A version 1 will still be able to operate, despite the release of version 2 and the services that use it. The operator will need to fully document its public APIs, and component and service developers will be contractually obliged to do the same.

Service descriptions
As an Innovation Platform would offer a plug and play software bus for applications and components, it would need to offer a set of tools that would document APIs and service locations. This would be shared with partners and partner developers, to help them develop new applications. Service descriptions would also be used to generate an application directory for end-users, in addition to any navigation model.

Service managed user interfaces
Any service must offer a single user interface and navigation metaphor to its users. One of the problems with WAP portals is that when a user passes from the portal site to hosted services, the look and feel changes. An Innovation Platform, acting as a host for applications and service components, would be able to avoid this by defining display rules and templates. Applications would deliver content to templates, rather than directly to user devices – allowing the operator to control look and feel, as well as enforcing their brand values and brand experience.

Open interfaces to operator services
Another important feature of the Innovation Platform was to be a set of APIs that gave third parties access to operator services. These would include network services, such as SMS and location information, as well as access to billing and account services.
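The side-by-side versioning rule behind the fixed-API principle can be sketched as a registry keyed on component name and version: callers bind to an explicit version, so a new release can never break an existing consumer. All the names here are illustrative:

```python
# Hypothetical sketch of side-by-side component versioning. Version 1 stays
# registered and callable after version 2 ships; there is no "latest" alias.

registry = {}

def register(name, version, impl):
    registry[(name, version)] = impl

def call(name, version, *args):
    # Consumers name the exact version they were built against.
    return registry[(name, version)](*args)

register("quote", 1, lambda sym: f"{sym}:100")
register("quote", 2, lambda sym, ccy="GBP": f"{sym}:100 {ccy}")

print(call("quote", 1, "ACME"))         # old consumers are unaffected
print(call("quote", 2, "ACME", "EUR"))  # new consumers opt in to v2
```

Retiring a version then becomes an explicit, contractual act of deregistration, rather than a side effect of a release.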

A web services based approach to customer-centric mobile portal architectures

First of a series of posts from an unpublished book chapter written in 2002.

Introduction: what are web services, and project overviews

This chapter looks at work done during 2000 and 2001 on defining an architectural approach that would enable network operators and third-party service providers to use a standard platform for the development, delivery and management of services for use with 2.5G and 3G mobile devices. Three projects are covered: one for a Far Eastern GSM operator, one for a European portal company and one for a European GSM operator. The work done with these clients led to the development of a web services based architectural approach, intended to enable organisations to rapidly roll out and evaluate customer-centric services.

Web services are a relatively recent innovation, one that has come from the extension of XML-based enterprise application integration techniques to both distributed application development and component software. With EAI, we can use Internet technologies to link disparate applications across a global organisation – even between organisations involved in a formal business-to-business partnership. Tools like Microsoft’s BizTalk and iPlanet’s Integration Server (based on the industry-proven Forte Fusion/Conductor combination) can be used to link applications through a corporate integration server, often based on a messaging framework to pass information between applications. A key technology in EAI is XML. The eXtensible Mark-up Language has been designed to allow application and OS independent exchange of information. By using XML to create application specific documents, containing inputs and outputs, information can be exchanged asynchronously – though near synchronous behaviour can be achieved. Businesses and applications wishing to exchange information via XML must develop a common business language that can be implemented as XML tags.
Message bus tools like IBM’s MQ Series can be used to deliver these messages, handling connections and communications protocols. Microsoft’s EAI solution, BizTalk, is an example of an XML-enabled message bus. XML messages are routed through a BizTalk server, delivering information to applications. A workflow design tool is used to orchestrate these messages, allowing the creation of business-centric translation rules. Translation services are an important feature of EAI solutions, as they allow messages from one organisation to be delivered to systems in another, by using an agreed “business grammar” to define message formats and content.

Recent work by the World Wide Web Consortium has extended XML considerably, increasing its suitability for use in EAI. The recent ratification of XML Schema has given developers a key tool for creating complex XML data exchanges. Instead of merely describing the structure of a document, like an XML DTD, an XML Schema is a powerful means of describing a document’s content – by allowing tag descriptions to both define and prescribe the data they will contain.

XML has also given us another piece of the web services jigsaw puzzle with SOAP, the Simple Object Access Protocol. SOAP offers a subset of traditional RPC functionality, allowing developers to expose object methods and parameters through XML interfaces. Short XML messages transmitted over HTTP (as well as HTTPS and SMTP) can be used to deliver calls to a SOAP object, with other XML messages returning the results. This is an important technology, as it allows a COM Windows application to consume objects in J2EE application servers. Extensions to SOAP such as DIME (Direct Internet Message Encapsulation) allow a SOAP message to contain binary data without requiring encoding.
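The SOAP mechanics described above can be sketched by hand-building an envelope. No network I/O is shown here, and the namespace, method and parameter names are invented for illustration; in practice the envelope would be POSTed over HTTP(S) to the service endpoint:

```python
# Hypothetical sketch: serialising a method call as a SOAP envelope.
import xml.etree.ElementTree as ET

def soap_envelope(method, params, ns="urn:example"):
    # Build the call body as <param>value</param> children of the method
    # element, wrapped in the standard SOAP envelope/body structure.
    env = ("<soap:Envelope xmlns:soap="
           "\"http://schemas.xmlsoap.org/soap/envelope/\">"
           "<soap:Body><m:{m} xmlns:m=\"{ns}\">{body}</m:{m}>"
           "</soap:Body></soap:Envelope>")
    body = "".join(f"<{k}>{v}</{k}>" for k, v in params.items())
    return env.format(m=method, ns=ns, body=body)

msg = soap_envelope("GetQuote", {"symbol": "ACME"})
print(msg)

# The result is well-formed XML that any SOAP stack could parse:
root = ET.fromstring(msg)
print(root.tag)  # -> {http://schemas.xmlsoap.org/soap/envelope/}Envelope
```

Real SOAP toolkits generate this plumbing from a WSDL description, which is exactly what makes a remote service usable as if it were a local component.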
Web services need technologies like SOAP and its associated Web Services Description Language (WSDL), as these allow development environments to access information about available functions. A web services aware IDE should be able to treat a remote service as a software component that can be used in your applications – and manage the syntax of your calls appropriately. At a higher level, UDDI (Universal Description, Discovery and Integration) acts as a repository for advertising WSDL service descriptions. Putting together XML Schema, SOAP, WSDL and UDDI gives us everything we need to implement web services. Instead of organisations offering full featured web applications, they will be able to deliver appropriate functionality to users, charging per use or on a subscription basis. As web services will be true software components, developers will be able to update and improve the services they offer, and as long as they do not change interfaces, users (both applications and end users) won’t know that there’s been any change. Web services are an attractive technology, and one suited for use in the mobile world, where customers are attracted to the best deal and where service providers are still learning what appropriate services for 2.5G and 3G subscribers will be.

Project 1: Far Eastern GSM operator

The first project involved working with a Far Eastern GSM operator to determine its approach to delivering services over its proposed 2.5G network. The operator was already linking IVRS, WAP and web services, but needed to find an approach that would reduce its time to market for new services, while giving it the flexibility to deliver content to the next generation of devices and to wireless connected PDAs.

Project 2: European ISP and Portal operator

This project involved working with a European ISP and Portal operator.
Owned by a national wireline telecoms provider, the ISP wanted to extend its existing portal service, with the aim of supporting a sister company’s 3G operations. The existing portal was offering some WAP services, but was mainly targeted at PC users, and the ISP’s own dial-up customers.

Project 3: European GSM operator

This project involved working with a European GSM operator that was looking to roll out further services around the world. In order to support these new operations, which would also extend its existing range of services, it needed to develop a management platform that would be suitable for use with its own service operations team, and with partners around the world.

User research and user experience: defining projects and solutions

Any customer-focused service, whether it’s on the web or delivered through mobile devices, needs to be focused on the needs of the end user. Too many projects have failed, and services been scrapped, because they have been technology or business led, delivering what an organisation thought the users needed, rather than what they actually needed and wanted. As a result, mobile service development needs to start with user research. While market research and business plans are still important, any results need to be validated with a sample of users. Techniques used will include ethnography, surveys, focus groups and user interviews. User research isn’t just for the early phases of a project. At every step through the project, results and outputs need to be checked against user response. This is especially important during the development phases. Regular prototypes of applications and services need to be tested – even if they are just wire frames or user interface mock-ups.

Dealing with customer churn and changing markets
Early mobile telecommunications markets were biased towards the operator, locking consumers into long term contracts and inflexible numbering schemes. The implementation of consumer-centric regulatory schemes put an end to these practices, and instead introduced more flexible contract terms, as well as consumer friendly schemes such as number portability. In the first client’s home market this process, along with the arrival of “pay-as-you-go” services, led to massive churn and heavy price competition. This had resulted in low costs for voice calls, and an emphasis on value added services as a differentiator between operators. There was also a strong emphasis on brand values and brand identification, with the client focussing strongly on the young adult market. One interesting side effect of the regulatory environment was the development of a saturated market, with 60% plus device penetration.

The second client’s market was younger, but had a much higher reliance on “pay as you go” customers. Mobile handsets were available from virtually any corner shop, in bubble wrapped packaging. Whichever company brought out the latest model phones would quickly find their packages at the top of the sales charts. A recently privatised incumbent national telco was finding its feet in a newly competitive market, and it was beginning to take advantage of its broadband assets and ownership of the main cable TV operator.

In both these cases there was a strong need for a solution that would allow the operator to react to rapid changes in the market. This need was made even more critical, as the changes were likely to be customer led – and maintaining customer bases would be important to future revenue streams.

SOA before the revolution

Over the winter of 2000-2001 I was consulting at a Portuguese mobile operator, helping them define the service platform for their next-generation launch. They were a new entrant into the market, and our team came up with a transactional portal for them.

However, this wasn't like any other portal. It was a collection of hosted and third-party components connected by web services, managed by a directory, linked by an XML business-process workflow language, and topped with an XML presentation layer that delivered a contextual UI to a range of devices - from desktop PCs to phones to interactive TV.
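To make the shape of that design concrete, here's a minimal sketch in modern Python rather than the Java and XML tooling of the era. All the names here are hypothetical, invented for illustration: a directory that components register with, a workflow that invokes them in order, and a presentation step that renders the same result differently per device class.

```python
class ServiceDirectory:
    """Maps service names to callables - a stand-in for the platform's directory."""
    def __init__(self):
        self._services = {}

    def register(self, name, component):
        self._services[name] = component

    def lookup(self, name):
        return self._services[name]


def run_workflow(directory, steps, context):
    """Invoke each named service in order, threading a shared context through -
    a stand-in for the XML business-process workflow language."""
    for step in steps:
        context = directory.lookup(step)(context)
    return context


def render(context, device):
    """Contextual presentation layer: one result, device-specific markup."""
    if device == "wap":
        return f"<card><p>{context['greeting']}</p></card>"
    return f"<html><body><h1>{context['greeting']}</h1></body></html>"


# Wire up two loosely coupled components and run a workflow across them.
directory = ServiceDirectory()
directory.register("authenticate", lambda ctx: {**ctx, "user": ctx["login"]})
directory.register("greet", lambda ctx: {**ctx, "greeting": f"Hello, {ctx['user']}"})

result = run_workflow(directory, ["authenticate", "greet"], {"login": "ana"})
print(render(result, "wap"))  # <card><p>Hello, ana</p></card>
print(render(result, "web"))  # <html><body><h1>Hello, ana</h1></body></html>
```

The point of the design is that the workflow and the directory know nothing about each other's internals: swap a hosted component for a third-party one, or add a new device renderer, and nothing else changes.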

It was everything that we'd call an SOA today.

The client never built the system, opting instead for the tried and tested walled garden approach. I wrote up the work we'd done as a chapter for a book on mobile commerce - which was never published. So, to help you all see where I come from, I'll be posting the piece here.

A New IT World Ahoy

The world of enterprise IT is changing - from server to desktop and everywhere in between.

You've heard the buzzwords: service architectures, loosely coupled applications, virtualisation, serialisation, network storage, network processing. These are just some of the ways in which the practice of IT is changing - a change that will have as much effect on the way businesses work as the arrival of the desktop PC.

I'm Simon Bisson, a technology journalist and consultant who's been writing about these issues for a long time now - and who has real-world experience of designing and building large-scale loosely coupled systems: from telecoms research labs to the early days of national ISPs, and from photo hosting platforms to telecom service platforms. It's been a long road from the early '90s to today - and a lot is happening around the world.

A lot of what I've been writing recently in my pieces for The Guardian and elsewhere has been about what I'm calling a "phase change in Enterprise IT". It's that point where all these changes are coming together to change the way we design and deliver IT. It's a place where hardware is abstract, where the OS and storage are virtual, where UI becomes contextual, and where process and service mean much more than applications.

I've decided to produce a blog focused on these developments here - my other, more general blog is Technology, Books And Other Neat Stuff.