Monday, February 13, 2006

Moving bloghost

I'm moving this blog to Wordpress.com.

Future entries will be at itphasechange.wordpress.com. I've already made a copy of all the content there - so please update bookmarks and RSS feeds.

The new RSS feed is at: http://itphasechange.wordpress.com/feed/.

I'll be tidying up the new site over the next week or so. See you there!

Wednesday, January 11, 2006

The Hypervisor Wars

I've realised I've mentioned the idea of the hypervisor wars without explaining what I mean by it. The underlying virtualisation technologies used in Intel's VT and AMD's Pacifica currently only allow a single VM manager to run. This means that the installed VMM (the hypervisor) has an incredible amount of power - it controls what runs and how it runs. Install yours first, and the machine is yours - especially if you lock your hypervisor into TPM or similar security mechanisms.

So what would the hypervisor wars mean? Firstly, an end to the open systems model that's been at the heart of enterprise IT for the last 25 years. If Microsoft and VMware fell out, VMware could reduce the priority of Windows partitions. Other hypervisors might have licensing conditions that make it impossible to run non-free OSes as clients. You could end up with a situation where each OS installation attempts to insinuate its own hypervisor onto the system partition. Security partition developers may find that they are only able to code for one set of hypervisor APIs - locking end users into a closed platform.

The end state? Co-opetition breaks down, the industry becomes enclaves built around hypervisor implementations, and end users find that they're unable to benefit from the possibilities of an open hypervisor architecture.

Can we avoid the hypervisor wars? Optimistically, I think we can. There are prerequisites, though: we need an agreed hypervisor integration architecture, and we need it quickly. Let VMM developers compete on ease of operation and management, not on who controls your PC.
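
(A rough sketch of the ground being fought over: the CPU advertises these extensions via CPUID, and whichever piece of software then executes the enabling instruction - VMXON, on Intel - owns the machine's single VMX root mode. The C++ fragment below, using GCC's <cpuid.h>, just checks the feature bits; whether the BIOS has actually enabled the extensions is a separate question.)

    #include <cpuid.h>  // GCC-specific CPUID helper
    #include <cstdio>

    int main() {
        unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

        // CPUID leaf 1: ECX bit 5 reports Intel VT-x (VMX) support.
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
            std::printf("Intel VT-x supported\n");

        // Extended leaf 0x80000001: ECX bit 2 reports AMD SVM ("Pacifica").
        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
            std::printf("AMD SVM supported\n");

        return 0;
    }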

How long before there's an Apple Hypervisor?

One thing to note about the new Apple Intel machines is that the Yonah chipset supports VT. With Apple saying that they'll let Windows run on their hardware, the question is: will they let a third-party hypervisor run? I suspect not - especially if they are using TPM in secure startup mode. Of course, they'll first need to enable VT in whatever BIOS they're using...

So will Apple produce its own hypervisor, or will it badge a third-party tool? My personal suspicion is that Apple doesn't have the skills to write its own hypervisor (there are only a limited number of people with the deep combination of hardware internals and OS knowledge required, and they're mainly at Microsoft and VMware), and that they'll announce a partnership with VMware at WWDC. Unless Apple's been hiring the Xen dev team on the sly...

Apple will quickly need to gain the high ground in managing virtualisation on their platform, as they'll need to maintain control of OS X running as a VM. Otherwise, will Apple be the first casualty of the hypervisor wars?

Monday, January 09, 2006

Opening up the Lightroom

Adobe's new Lightroom is, as they say, the bee's knees.

Fast, responsive and ideal for working with RAW images, it takes the best of Camera Raw and Adobe Bridge and turns them into a one-stop shop for basic image manipulation and comparison. Best thought of as a digital lightbox, it has an adaptive UI that makes it easy to hide the elements you don't need and just concentrate on the images. An image workflow tool, it helps you manage how you work with images - and how you capture them.
Lightroom Beta lets you view, zoom in, and compare photographs quickly and easily. Precise, photography-specific adjustments allow you to fine tune your images while maintaining the highest level of image quality from capture through output. And best of all, it runs on most commonly used computers, even notebook computers used on location. Initially available as a beta for Macintosh, Lightroom will later support both the Windows and Macintosh platforms.
Which means it runs quite happily on my aging G4 PowerBook (unlike the G5-optimised Aperture).

That's not to say that Lightroom is competition for Aperture.

This is more a first look at how Adobe is rethinking what people are doing with the Photoshop toolset, and putting together the beginnings of a script-controlled service framework for its next generation of imaging applications. It's a model that fits in nicely with a conversation I had recently with Adobe's CEO Bruce Chizen (which should be in the next issue of PC Plus), where we talked about Adobe's strategic direction after the Macromedia acquisition. I'll leave the conversation to the article - but one thing: I think Adobe are one of the companies that bear watching over the next three to five years.

(I'm glad I can talk about it now - I saw it in December, and was very impressed at the time - unfortunately I'd had to sign an NDA.)

Betanews notes that there won't be a Windows version until Vista hits the market. I'm not surprised. I strongly suspect that Microsoft is working with Adobe to make Lightroom one of the apps demoed at the Vista launch. The UI of the version that Adobe demoed back in December would work very well on WinFX - it's ideal for XAML. Microsoft has had Adobe on stage showing proof-of-concept XAML applications in the past, so having them show shipping code at the launch would make a lot of sense...

Cross posted to Technology, Books and Other Neat Stuff


Thursday, January 05, 2006

Manage Your VMs

Here's a useful post from the always interesting Scott Hanselman, linking to hints and tips on how to use VMs more effectively.
There's a number of generally recommended tips if you're running a VM, either in VMWare or VirtualPC, the most important one being: run it on a hard drive spindle that is different than your system disk.
It's good advice. I'll be moving my set of VMs to a separate SATA drive on my main PC. However, sticking them on a fast USB 2.0 drive looks to be a sensible approach as well.
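
(On the VMware side at least, relocating a guest is just a case of copying the virtual disk and updating the path recorded in the machine's .vmx configuration file - something like the fragment below, with purely illustrative paths:)

    scsi0:0.present = "TRUE"
    scsi0:0.fileName = "E:\vms\devserver\devserver.vmdk"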

An interesting thought occurs - will we see hardware designed for hypervisors and hardware virtualisation coming with many hard disks? Or will we see a caching layer used, passing operating systems into partitioned cache RAM?



Saturday, December 17, 2005

Open the APIs and they will come?

It's a truism of the service world that open APIs mean more developers working with your public services. Google is a good example of this, and it's doing it again by opening up its Talk service with an interesting set of functions, as described on TechCrunch. Libjingle looks very interesting (and probably something for me to think about with my Server Management messaging editor hat on). Looking quickly at Google's announcement, we see a collection of tools that could make it a lot easier to build collaboration applications:

We are releasing this source code as part of our ongoing commitment to promoting consumer choice and interoperability in Internet-based real-time-communications. The Google source code is made available under a Berkeley-style license, which means you are free to incorporate it into commercial and non-commercial software and distribute it.

In addition to enabling interoperability with Google Talk, there are several general purpose components in the library such as the P2P stack which can be used to build a variety of communication and collaboration applications. We are eager to see the many innovative applications the community will build with this technology.

Below is a summary of the individual components of the library. You can use any or all of these components.

  • base - low-level portable utility functions.
  • p2p - The p2p stack, including base p2p functionality and client hooks into XMPP.
  • session - Phone call signaling.
  • third_party - Non-Google components required for some functionality.
  • xmllite - XML parser.
  • xmpp - XMPP engine.
Looks interesting. The related Google Talkabout blog has just gone onto my blogroll...
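
I haven't dug into the source itself yet, so treat the C++ below as a purely hypothetical sketch of how components like these typically fit together - call signalling rides over XMPP, while the media flows peer-to-peer. Every name in it is invented for illustration; it is not the real libjingle API.

    #include <iostream>
    #include <string>

    // Hypothetical sketch - every name below is invented; this is
    // NOT the real libjingle API.
    class XmppConnection {                       // cf. the xmpp component
    public:
        void connect(const std::string& jid) {
            std::cout << "signed in as " << jid << "\n";
        }
        void send(const std::string& stanza) {   // the signalling channel
            std::cout << "XMPP out: " << stanza << "\n";
        }
    };

    class P2PSession {                           // cf. the p2p and session components
    public:
        explicit P2PSession(XmppConnection& conn) : conn_(conn) {}
        void call(const std::string& peer) {
            // Call setup is negotiated over XMPP; once both ends agree,
            // the media itself would flow directly peer-to-peer.
            conn_.send("<session action='initiate' to='" + peer + "'/>");
        }
    private:
        XmppConnection& conn_;
    };

    int main() {
        XmppConnection conn;
        conn.connect("me@gmail.com");
        P2PSession session(conn);
        session.call("friend@gmail.com");
        return 0;
    }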

Tuesday, December 13, 2005

Platforms and stacks

I've written a bit on the idea of stacks as a key component of next generation computing environments, but they're only part of the story. Once you've implemented a stack, and are using it to deliver services, you need to group the services together, and add a management layer to show usage and predict future operational needs. The resulting architecture can best be described as a platform - as it's the foundation for a range of SOA processes. Amazon has been slowly turning itself into a platform, and they've just turned their search engine into a public managed platform. Alexa's been around a long while, but it's turning itself into a set of services - managed (and priced) using a utility computing model. An interesting move, from an SOA pioneer.

Sunday, December 04, 2005

Sun becomes Wilkinson Sword

While I noodle away at my thoughts on licensing for the next generation of IT systems, Sun is being surprisingly innovative. Not only are they moving their software sales model to support services, but they're also using the same model to get developers onto their hardware. In the US you can get a shiny new 64-bit Opteron-powered Sun Ultra 20 Workstation for only $30 a month (payable a year in advance). Sign up for three years of support for Sun's OS and dev tools, and the hardware comes free. An interesting approach. It'll also be interesting to see how the rest of the Java tools world responds. Will BEA start giving away its tools to drive people to the AquaLogic and WebLogic platforms? Time will tell.
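
(Worth doing the sums: $30 a month, payable a year in advance, over the three-year commitment comes to 36 × $30 = $1,080 all in.)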

Friday, November 25, 2005

Reusing interfaces

Richard Veryard has some interesting things to say about reuse in the SOA world. It's a problem I've been thinking about too - but from a very different direction. Reuse isn't just about using the same piece of code again and again across your business's many applications. It's also about ripping and replacing code without affecting all the applications that use it. In the past, reuse has often been avoided precisely because changing a shared component could have undue effects on key business operations...

SOA changes the status quo. The key seems to be that effective SOA demands what I think of as "interface first" design. Often described as "design by contract", this approach fixes the properties, methods and events offered by a service. What it doesn't do is define the code that delivers those service elements. If an application only needs to be aware of a service's interfaces, then an application instance can be switched from using service V1.0 to V1.1 without affecting operation, as long as V1.1 offers the same service interfaces as V1.0. A major change - a V2.0 - could still offer the V1.0 interfaces at the old service URI, with new functions at an alternative service URI. Rip and replace without affecting consuming applications: a definite benefit of the SOA world.
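
To make the "interface first" point concrete, here's a quick C++ sketch built around a hypothetical quote service - all the names are invented. The consumer compiles against the contract alone, so the implementation behind it can be ripped and replaced at will:

    #include <iostream>
    #include <string>

    // The contract: the properties, methods and events are fixed here.
    class IQuoteService {
    public:
        virtual ~IQuoteService() {}
        virtual double quoteFor(const std::string& symbol) = 0;
    };

    // V1.0: the original implementation (stubbed for the sketch).
    class QuoteServiceV10 : public IQuoteService {
    public:
        double quoteFor(const std::string&) { return 42.0; }
    };

    // V1.1: entirely new code behind the unchanged contract.
    class QuoteServiceV11 : public IQuoteService {
    public:
        double quoteFor(const std::string&) { return 42.0; } // new back end here
    };

    // Consumers never name a concrete version...
    void printQuote(IQuoteService& svc, const std::string& symbol) {
        std::cout << symbol << ": " << svc.quoteFor(symbol) << "\n";
    }

    int main() {
        QuoteServiceV11 svc;  // ...so V1.0 is swapped for V1.1 untouched.
        printQuote(svc, "ADBE");
        return 0;
    }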

Thursday, November 24, 2005

Avalanche in (limited) operation

It appears from this blog entry that Microsoft are starting to use their Avalanche P2P distribution network in anger... With the shift to two-year release cycles for stack components, and monthly CTPs, I suspect it won't be long before this becomes common practice for all betas and for MSDN.