However, Twitter has always been a platform which others have connected to and built upon to diversify the Twitterverse. An example only yesterday was Ben Marsh’s map of snowfall in the UK, which was populated with data submitted by Twitter users.
The ease with which you can collect and manipulate this data is due to Twitter’s messaging mechanism, along with the active involvement of the community. Whilst this will no doubt continue in one-off situations, the real power will come from automating the data collection. What if we could use our mobile phones to collect data and provide a real-time weather map? That may be a bit blue sky, but GPS is already built into mobile phones and could easily be combined with Twitter to create some interesting applications, although they could be a privacy nightmare. Distributing this type of data has historically required proprietary protocols, which makes development harder and slows down innovation. The Twitter (or XMPP) protocol provides a standard methodology for the distribution and collection of data, in the same way that RSS provided a simple way to distribute website content.
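To make the data-collection idea concrete, here is a minimal sketch of the kind of parsing a #uksnow-style map needs: pulling a postcode and a snow score out of free-form tweet text. The exact tweet format and the sample tweets are assumptions for illustration; a real collector would read from a live Twitter search feed rather than a hard-coded list.

```python
import re

# Pattern for tweets like "#uksnow SW1A 5/10" -- the hashtag, the outward
# half of a UK postcode, and a snowfall score out of ten (format assumed).
UKSNOW = re.compile(r"#uksnow\s+([A-Z]{1,2}\d{1,2}[A-Z]?)\s+(\d{1,2})/10",
                    re.IGNORECASE)

def parse_snow_report(text):
    """Return (postcode, score) if the tweet carries a snow report, else None."""
    match = UKSNOW.search(text)
    if not match:
        return None
    postcode, score = match.groups()
    return postcode.upper(), int(score)

# A couple of hand-made tweets standing in for a live feed.
for tweet in ["#uksnow EH12 5/10 proper blizzard here",
              "Lovely day, no snow at all"]:
    print(parse_snow_report(tweet))
```

Running this prints ('EH12', 5) for the first tweet and None for the second; everything beyond that, plotting the points on a map, is presentation.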
This definitely does not solve Twitter’s monetisation question (unless they charge for access? surely not?) and in fact raises several more. The first is scale: can the XMPP protocol which underlies all this cope with this level of load? Twitter’s stability has been fine of late, but historically it did not scale well at all. Can it continue to scale to cope with this type of usage?
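For the curious, consuming a stream over XMPP really is very little code, which is part of why it is so attractive as a distribution mechanism. A rough sketch using the Python slixmpp library (the account details are stand-ins, and the library choice is mine; Twitter’s own XMPP feed had its own access arrangements):

```python
import slixmpp

class FirehoseListener(slixmpp.ClientXMPP):
    """Logs every XMPP message delivered to this account."""

    def __init__(self, jid, password):
        super().__init__(jid, password)
        self.add_event_handler("session_start", self.on_start)
        self.add_event_handler("message", self.on_message)

    async def on_start(self, event):
        # Announce presence so the server will route messages to us.
        self.send_presence()
        await self.get_roster()

    def on_message(self, msg):
        if msg["type"] in ("chat", "normal"):
            print(f"{msg['from']}: {msg['body']}")

# Hypothetical account; credentials would come from config in practice.
xmpp = FirehoseListener("listener@example.com", "secret")
xmpp.connect()
xmpp.process(forever=True)
```

The hard part is not the client side shown here; it is the server fan-out when millions of clients want the same stream.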
The second is the Twitter interface. It is all very well having this firehose of data displayed on one web page, but with so much data flying through, the user does not want or need to see all of it. Applications like Tweetdeck and Twhirl have come a long way in helping to deal with this. Tweetdeck in particular has moved away from the single river of data to a group-based approach, allowing you to interact with different groups of people. Much easier.
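At its core, that group-based approach is just routing: decide which column each incoming tweet belongs to and quietly drop the rest. A toy sketch of the idea (the group names and sample tweets are made up, and real clients like Tweetdeck will have their own internals):

```python
from collections import defaultdict

# Hypothetical groups: sets of screen names, keyed by column name.
groups = {
    "friends": {"alice", "bob"},
    "news": {"bbcbreaking", "skynews"},
}

def route(tweet, groups):
    """Return the names of every group containing the tweet's author."""
    return [name for name, members in groups.items()
            if tweet["user"] in members]

columns = defaultdict(list)
stream = [
    {"user": "alice", "text": "Snowed in again"},
    {"user": "bbcbreaking", "text": "Heavy snow across the UK"},
    {"user": "stranger", "text": "noise"},
]
for tweet in stream:
    for column in route(tweet, groups):
        columns[column].append(tweet)
# Tweets from accounts outside any group never reach a column.
```

The firehose still flows in full, but the user only ever sees the columns they asked for.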