Wednesday, January 16, 2013

Atlas - a cross platform mobile framework

Cross platform mobile apps using HTML5 have got a bad name recently since the "biggest mistake" comment from Mark Zuckerberg, and while I agree with the part about making big bets on HTML5, I still see this as the best approach we have so far for cross platform mobile apps. It would seem perfectly fine for cases where cross platform support is important, or as a means to test adoption with a move to native when justifiable.

It is with the above in mind that I present Atlas.

Atlas is a complete mobile application environment with Test Driven Development. It brings together core web technologies into an environment that supports:

  • TDD - utilizing jasmine.js
  • Backbone - JavaScript MVC
  • Mustache - logic-less templates
  • jQuery Mobile - HTML5-based user interface system
  • PhoneGap/Cordova - cross device runtime platform
  • Server Push Framework - utilizing juggernaut
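To illustrate the logic-less templating idea that Mustache brings to the stack, here is a minimal sketch of `{{name}}` substitution in plain JavaScript. This is an illustration only, not Mustache itself, which adds sections, partials and HTML escaping on top of this idea:

```javascript
// Minimal logic-less template rendering: replace each {{key}} with the
// corresponding value from the view object, or "" if it is missing.
function render(template, view) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return key in view ? String(view[key]) : "";
  });
}

// render("Hello {{name}}", { name: "Atlas" }) gives "Hello Atlas"
```

The appeal of this style is that templates carry no application logic, which keeps them testable with jasmine and reusable across views.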

This is usable now but still under development as and when I get time.


Thursday, August 25, 2011

PHP WebSocket Server Framework

Finding a bit of a gap in the market, I decided to implement a PHP based WebSocket server with an application framework. I did find other PHP implementations of WebSockets, but for the most part these were very simple implementations (often single classes to be modified) rather than a full framework that could be evolved with the standard (which is a moving target) and offer a simple application framework to enable application protocols to be implemented in isolation from the relative turmoil underneath.

There are of course some excellent WebSocket server implementations out there, including PyWebSockets, along with a number of very good Node.js offerings including Socket.IO. So the other reason for doing something in PHP was to get some experience below the WebSockets covers.

And so I offer you the PHP-WebSockets-Server, which is currently hosted under Git. I plan (as time allows) to evolve this along with the standard and, if I can figure out how, to create an Apache module for tighter integration. The server currently supports both draft 75 and 76 of the standard along with the newer HyBi protocol. As such it can deal with what most HTML5 clients (and Node.js clients) can throw at it, although it has not yet had a rigorous amount of testing under its belt. There are also a few things missing, the main ones being support for extensions and chunked data. I hope to get to these in early September, depending on work demands.

I don’t want to repeat here what is written in the Readme documentation in Git, but to give a brief outline of how it is architected, there are three major levels.

Server Framework

The server framework deals with how to run the server, and to that end there are a couple of methods depending on the platform it is being run on. Basically it can be run from the command line on either Linux or Windows, and there is a daemon script for running it as such on Linux.

Library Layer

The main implementation is supported in a PHP library which includes a main server class, interfaces and implementations to deal with the WebSocket protocol versions (specifically handshake and data framing), as well as some utility classes.

Application Layer

The server is extended through the creation of application protocols. All application protocols must implement the WSApp interface, which is isolated from the protocol mechanics that may change over time. Application protocols can be specified at the client using the protocol argument of the WebSocket constructor; for example, in javascript the following

var socket = new WebSocket(host, "Foo");

would request the Foo application protocol.

The organization above allows for graceful evolution of the WebSockets standard whilst supporting various client interpretations without requiring changes to the application protocol.
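Server-side, sub-protocol negotiation boils down to matching the client's Sec-WebSocket-Protocol list against the registered applications. A sketch of that selection step, in JavaScript for brevity (the PHP framework does the equivalent against its registered WSApp implementations; the function name is mine):

```javascript
// Pick the first client-offered sub-protocol that the server has an
// application registered for; return null if there is no overlap, in
// which case the handshake omits the Sec-WebSocket-Protocol header.
function selectProtocol(headerValue, registeredApps) {
  var offered = headerValue.split(",").map(function (p) { return p.trim(); });
  for (var i = 0; i < offered.length; i++) {
    if (registeredApps.indexOf(offered[i]) !== -1) {
      return offered[i];
    }
  }
  return null;
}
```

So a client asking for "Foo" gets routed to the Foo application protocol if one is registered, regardless of which wire-level protocol version the connection arrived on.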

As I mentioned above, I will try to evolve this over time as I can and as major changes to the standard appear. I intend to use it for my own prototyping needs and therefore will have some motivation to update it.

Thursday, August 18, 2011

Cross Platform Mobile Applications using HTML5–Part 1

A couple of weeks ago I set myself a task of developing a mobile application using HTML5 that could be deployed onto as many popular mobile platforms as possible with the minimum amount of modification. This post describes that task and conclusions drawn from doing it.
Firstly I should describe what I mean by a mobile application, since HTML5 is typically used via a browser and a URL. In this case I mean a local application that can be bundled and deployed from an App Store/Marketplace etc. and looks and feels the same as a local application. Recently there has been a lot of interest in using HTML5 to develop mobile applications in this way, so I wanted to test the water.

Lots of options

A quick google search on the subject will find a lot of different technologies and frameworks that claim to provide just what you need to attempt what I was attempting. I set myself some basic design principles and set about verifying as many platforms as I could find. I am sure there are ones which I missed, but what follows is the cast that I played with. All of them do what they say they can, but there are tradeoffs, and to be honest your choice will depend on the tradeoffs you want to make. So set yourself some guidelines. Mine were quite simple. Firstly I wanted to have a lot of flexibility, and as such wanted good access to HTML5/CSS and javascript without being forced to live in one or the other. Whatever I chose had to support mobile device features such as touch, rotation, different viewport dimensions and access to device sensors. I also didn’t want to learn a complex new object model in order to develop my app, so reuse of existing approaches was a good thing. And finally I didn’t want to pay licensing fees.
I tested a number of multi-platform technologies such as Jo, Sencha, Titanium, and JQTouch, as well as one (Enyo) that wasn’t billed as multi-platform; see my previous post for details. They all worked to a certain extent, but for one reason (e.g. lack of touch support) or another (e.g. complex javascript framework) I ruled them out. I finally landed on jQuery Mobile. Although in beta and not as polished as something like Sencha Touch, it was the best option in that it satisfied all my criteria for a framework. It embraces HTML5 and CSS (rather than encapsulating them) while providing a powerful javascript framework with a lot of nice features for a mobile developer. Finally, as its name suggests, it is built on and uses jQuery, which means that a lot of current web developers will be able to take advantage of its features without skipping a beat.

Going Cross Platform

The application framework provides the basis for creating the look and feel of an HTML5 mobile application, but it doesn’t help with creating cross platform applications such as I was interested in. It is fine for web based applications accessed via a browser, but not for a native app deployed and launched like a native app. Luckily the bit that is missing can be satisfied by a great piece of technology called PhoneGap. PhoneGap not only provides an API to access many of the common sensors built into mobile platforms (e.g. location, accelerometers etc.), it also provides the application shell support which enables you to repurpose your HTML5 look and feel into different mobile environments without (for the most part) changing a line of code. There might be other technologies that do the same, but to be honest I stopped looking once I played with PhoneGap; it is a fairly complete package which is well documented and integrated with the XCode, Eclipse and Titanium Aptana IDEs, which became the environments I used the most during this task.

The process

So with my choice of application environment sorted, it came down to the task of choosing an app to build. I had decided that the actual app didn’t matter, since it was the journey rather than the destination which was important. Once I had achieved what I set out to do, repeating it for other applications would be easier. So I actually found a jQuery Mobile tutorial on how to build a feed reader. This tutorial demonstrates how to write a web service whose UI is generated by php files into HTML/CSS/JS pages which are loaded by the browser. All the page generation is done on the server, and as such the tutorial creates a web application in the traditional browser based model. However, it is a good tutorial, and it allowed me to both learn jQuery Mobile and provide a basis for my local application experiment.
I re-created the application to work without a service component, using jQuery Mobile to generate the UI based on user events. The whole experience was packed into a single HTML file (thanks to the multi-page concept in JQM) and a single javascript file to drive the application logic. I then used PhoneGap to provide the shell for applications which could be used on devices. My first test case was to provide the feed reader app for both Android and iPhone, and I later expanded to WebOS. Using PhoneGap it would be a simple matter to create an application for all the platforms PhoneGap supports, as long as the platform supports HTML5.
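Client-side, generating the UI from user events largely reduces to building list markup from feed entries and handing it to jQuery Mobile. A sketch of the data-to-markup step (the entry fields and function name are illustrative, not from the tutorial's code):

```javascript
// Build the <li> markup for a jQuery Mobile listview from an array of
// feed entries. In the app this string is appended to the listview
// element and followed by .listview('refresh') so jQM restyles the
// dynamically inserted items.
function entriesToListItems(entries) {
  return entries.map(function (e) {
    return '<li><a href="#entry-' + e.id + '">' + e.title + '</a></li>';
  }).join('');
}
```

Keeping this step a pure string-building function makes it easy to spot the kind of malformed-markup bugs discussed below, since the output can be validated in isolation.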
The image above shows the same app running as a local app on three different platforms. On the right it is running in the Android emulator, bottom left it is running on the iPad emulator under XCode, and finally in the background it is running in Chrome on the desktop (more on this later). This is the same HTML/CSS/JS code running on all platforms. The code is simply slotted into the app shell created by PhoneGap.


So I can claim success, right? Well, not exactly, because although it is possible to do what I set out to do, there are a few things that need to be improved before I feel this kind of thing is ready for prime time.
First in line is a decent development environment. I used a combination of Titanium Aptana and the Chrome browser. Aptana managed the source code, while testing and debugging were done in Chrome (which is why I show it above). At the moment both are needed. This wasn’t ideal, but it was possible; it is just not as seamless as you might like. Getting tighter integration between the development process and the debugging process would make life a lot easier all around.
One area that really caught me out is catching errors in generated code (e.g. HTML5 or CSS). Inserting code into the DOM is almost inevitable in this kind of activity, and it can be (and was for me) the source of a lot of problems. Although Aptana does basic syntax checking and code completion, you need to use validators (the W3C HTML5 Validator and JSLint in my case) to make sure you have clean code. Unfortunately the validators work off static code; if there is an error in any of the dynamic code (a closing H2 tag in my case) there is no way to pick it up other than using the DOM browser (in Chrome) and wading through looking for errors. This is very time consuming, so a validator that works off generated code would be great.
The second problem I had to deal with is that although the same code can run on all platforms, and PhoneGap can be used to hide a lot of platform differences, there are still accommodations that need to be taken into account. One of the big ones I had to deal with was the lack of a back button on iOS, requiring me to generate a soft button when the app was running on iOS. This isn’t hard, as PhoneGap provides an API to discover the platform, but it is something to be aware of and requires testing.
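The accommodation itself is small. PhoneGap reports the platform as a string via its device API once the native bridge is ready; the decision of whether to render a soft back button can then be kept in a tiny testable helper (the helper name is mine, not a PhoneGap API):

```javascript
// iOS devices have no hardware back button, so the app must render its
// own. The platform string is what PhoneGap's device API reports after
// the "deviceready" event fires.
function needsSoftBackButton(platform) {
  return platform === "iPhone" || platform === "iPad" || platform === "iOS";
}
```

In the app this check gates whether a back button is injected into each jQuery Mobile page header; Android and WebOS users keep using their hardware button.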
Having said the above, I am encouraged that this is actually possible; it really opens up the whole mobile space a little more, and although there are going to be certain apps where pure native code is required, there are an awful lot that can be coded in the same way as I have done. One question I always get asked with respect to using HTML5 is about performance. All I can say in this case is that when I deployed the feed reader app onto my device (a G2) it performed very well. The app loaded fast (much faster than the network version) and was responsive to user actions. There was a delay when retrieving feeds and feed entries, but those were network delays and could be prevented by using caching and prefetch. As it is, jQuery Mobile already manages a lot of that kind of thing.
The advantage of creating a local HTML5 app versus a traditional service based application is that other areas of HTML5 can be exploited; perhaps the most important one from a mobile perspective is dealing with offline operation. Having the application local also cuts down on network access, allowing for minimal amounts of information to be passed between a service and an app. These are areas I plan to deal with in subsequent posts.
For those interested, the source code is available so you can play in your own pen. As to what is next: now that I am interested in how having HTML5 on the client can and should affect the design of mobile applications, I am eager to look at both offline operation and the use of websockets to potentially reduce the battery impact of traditional polling apps. More on this later.

Friday, July 1, 2011

Cross platform development with Enyo

I have been looking to use HTML5 for cross platform development on mobile devices and have probably tried just about every framework going; they all suffered from different problems, from not handling screen resolutions, to not supporting touch, to not being mature enough. Today I managed to get access to the Enyo framework, which is part of the WebOS 3.0 SDK.

I won’t go into the details of the framework just now, but I went through the tutorial to create a simple feed reader app. I used Aptana to develop the application and debugged it using the Chrome browser. Once I had the app developed, I used the Palm command line tools to throw the app onto the WebOS emulator. You can see both browser and emulator running the same app below.


Everything works quite well, so I decided to try my luck at running the same app on my Android G2 phone. In Eclipse I created a PhoneGap application and replaced the web asset directory with the same app structure as the WebOS version. Before I could run the app I needed to copy the Enyo runtime onto Android. This is not a small framework, but it was easily installed on the SD card via USB. Then the only modification I needed to make was in the index.html, at the point where it pulls in the Enyo framework. In my case it went from:

<script src="..\..\..\..\Program Files (x86)/HP webOS/SDK/share/refcode/webos-framework/enyo/1.0/framework/enyo.js" launch="debug" type="text/javascript"></script>

to:

<script src="content://" launch="debug" type="text/javascript"></script>

Then I used Eclipse to install and run the app, and voilà:


It ran great. The performance wasn’t as snappy as I would have liked, but it was certainly very usable, and the app scales very well from the large screen to the small. This looks like the most promising cross-platform HTML5 solution I have used so far. Now that I have the process down, I am interested in how far I can push this framework, and I hope to post more on this. I am also interested in whether Enyo would run on iPhone/iPad. As I don’t have the development environment for that, I would be interested to hear if anyone else has tried it.

Sorry for the short post, but it is Friday evening and I need a beer.

Edit 7/5/2011

After a bit more hacking around in the Enyo API, I have found a couple of areas which don’t work in a cross platform manner. This perhaps is not surprising, since Enyo was never released as being cross platform in any way. The areas that would affect any application intended to run on more than just Palm devices relate to the specific environment and user interface of those devices. I have found three such areas so far:

  • Services: Enyo provides access to a number of pre-defined services on the device. Some of these (e.g. WebService) work across platforms, but PalmService is specific to the device and does not exist on other platforms.
  • AppMenu: This one is a little trickier, since it affects the UI portion of the application. Although most platforms have the concept of an application menu, the AppMenu component does not travel well off the device. There is a mechanism to display it in a desktop browser (by pressing Ctrl+`), but I certainly could not find the equivalent on Android to make it appear, and it certainly is not bound to the existing menu button on my G2.
  • Handling Back: This can easily be done by providing an application specific back button, but a number of devices have an existing back button. Frameworks like Jo and JQM recognize this and provide support based on the platform capabilities. Enyo ignores the back button on Android devices, which makes it hard to use in multi-view applications where the learned action is to press the back button.
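For the back button case, one workaround when wrapping the app in PhoneGap is to hook its "backbutton" event and route it to the application's own view stack. A sketch of the routing logic, kept self-contained with a plain array standing in for the app's view stack (the function and wiring are mine, not an Enyo or PhoneGap API):

```javascript
// Routes a hardware back-button press to the app's own view stack.
// In a real PhoneGap app the returned handler would be registered via
// document.addEventListener("backbutton", handler, false).
function createBackButtonRouter(viewStack) {
  return function onBackButton() {
    if (viewStack.length > 1) {
      viewStack.pop();   // go back within the app
      return true;       // handled by the app
    }
    return false;        // at the root: let the platform exit the app
  };
}
```

Frameworks like JQM do this wiring for you; with Enyo you would have to map the pop onto whatever view-switching mechanism your application uses.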

Thursday, August 26, 2010

Is the world ready for an Fphone?

Last week Facebook launched its newest feature (in the US only at the moment) called “Places”. Clearly a shot across the bow of the growing upstart foursquare, which has been steadily gaining users and has so far failed to secure a trademark on the notion of “checking in”.

Facebook Places is an interesting but logical shift, as the social network makes its first move towards the physical world. It is also the first step which really brings facebook into the mobile space and effectively opens up their application platform to geo-social, geo-spatial and location based services. The revamped API now exposes the “social graph” of its users to applications and devices that are authorized to use it. The well defined schema provides an interesting infrastructure for designing a well integrated mobile experience.

Facebook is reported to have a community of around 500 million users, and as such certainly represents a very key social fabric on the web. Given all of this, I wonder if we will start to see the notion of a facebook themed mobile device that provides the best mobile experience for interacting with that platform. Today facebook has been relegated to being an app, which doesn’t make it a totally seamless experience to set status, post pictures and now expose your location on a mobile device. As a service, Facebook exposes a really nice set of integrated social interaction tools which support subtly different ways to connect with friends and family.

I am not so sure I would want to see an Fphone developed in the same guise as the Gphone from Google, but it might be interesting as a future feature phone concept. What do others think?

Friday, August 6, 2010

Geocommunities - It’s a Social thing

Social has been the big thing over the last few years, with social media and social networks springing up all over the place and bringing people together. Staying connected, hanging with your tribe, digital snacking: these are old and new behaviors that these networks have created and foster. Facebook is the dominant one, at least in North America, and boasts a large and growing user base. Delivering a mobile device without a good facebook and twitter client will invoke the wrath of the reviewers and the bloggers. New services appear that try to exploit a new facet of online social behavior, but all have one single driver, the community they are able to attract, and from this one revenue source, advertising.

Perhaps it might be time to take a fresh look at the whole social network/media space. My personal take on this area offers a more holistic view, segmented along the dimensions of time and space. These are dimensions which create both the basis and the opportunity for humans to socialize, and where digital media can enhance the social interaction. Using this model it is easy to see where existing social tools focus and where the whitespace is. The following diagram maps out the social model.


The dimensions shown above bisect the temporal and spatial boundaries of social interaction.

  • Same Time/Different Place: This is clearly an area where technology has had significant impact. The ability to make the spatial divide disappear has been the main focus of telepresence and conferencing technologies. The first such technology was of course the telephone, but we now see Halo rooms and mobile apps that deliver a real-time AV experience for remote groups, along with chat and IM programs, all of which assume a same-time interaction model.
  • Different Time/Different Place: This has been the domain of messaging and mail oriented technologies. The immediacy of the interaction is not important, but the information in the interaction is of great importance. In recent times the web has served to democratize information across the time and space divides, and things like this blog also fall into this kind of social interaction.
  • Same Time/Same Place: Some people may think that this was just what existed before we had technology. But I feel this is actually an area where there is space to innovate. This innovation comes from the use of social media and also social commerce (more on this later).
  • Different Time/Same Place: This one might be harder to imagine, but most physical bulletin boards fall into this category; even graffiti speaks to this dimension. More recently, with things like geo-tagging and location based services, there have been grass roots efforts to enable the place to play a role in social interactions independent of time.

Technology has clearly influenced the nature of social interactions in certain areas of the map above. However, I would say that most of the effort has been placed along the top row, where the spatial divide is dominant. The thing that has driven this has been the natural human desire to maintain connectedness even when away from the people they wish to connect with. What’s interesting is that typically what is going on is sharing. Connections are maintained through sharing information with a community of interest. Posting something on facebook is only desirable if there is a large community of people (friends) to read and comment on it. This desire to share, to discuss, to recommend, to provide advice and commentary would seem to be universal.

Perhaps what is missing from the lower row in the figure above, where it is the time domain that is changing rather than the place, is the notion of community. Perhaps if there was a way to capture the community at a place, irrespective of time, that would enable similar social sharing and social media to occur as it does in the more traditional distance based online communities.

I am going to define the communities that address the social needs in the lower row of the figure above as Geocommunities. The community exists because of the place rather than a URL. We all belong to Geocommunities as we move around and spend time in other locations. Sometimes we are surrounded by people we know, and sometimes by strangers or a mix. In either case the Geocommunity can act as a means of sharing, for example:

  • Sharing of media: Visiting a friend’s home and displaying images of your vacation on their TV, printing a recipe on their printer, or simply exchanging a URI they should see.
  • Sharing of behavior: Ever turned on a computer in a location and been confused about which network access point to connect to, which printer to use, or where most people eat? This is community information that can be shared (anonymously) and used to create a better experience.

There are some applications that do some of the above, but nothing on the scale of online social networks. The area is mostly untapped, and it would seem that linking the communities that exist in the physical world with the ones that we belong to in the virtual world would extend the richness of applications and services and enable new social interactions and behaviors to develop.


It would seem to me that geocommunities, and the support that would need to surround them, would most logically come from a mobile solution company. Clearly, in any place oriented solution the mobile device is a key component, as it would be the instrument through which sharing is enabled and the lens through which spatial information would be discovered.

Friday, July 23, 2010

Context Matters

The nature of pervasive computing implies mobile, always on and in general ambient. At a recent meet-up in Palo Alto hosted at the Ubiquitous Media Studio, the conversation centered a lot on sensory enhancement and control, and I have always thought that in human terms pervasive computing would be very much like a sixth sense. Today computing is attentive and task driven; the ideas behind pervasive computing drive more towards an ambient model, providing enhancement and augmentation to the way we live, work and play. This will pull on many areas, require new paradigms and create new behaviors. The goal is to provide experiences that are in the moment and related to the situation and needs, rather than a human response to an event.

There is some glue that is required in order to pull this all together, and I keep coming back to “context” being a key enabler. The ability of a system to understand, react to and even predict context will enable the automation and delivery of in-the-moment experiences. Today most people believe that context is merely about location and the use of GPS coordinates to fix a physical position into which relevant services can be delivered. It is true that GPS is an extremely good example of context, but I don’t feel it is enough, and to move beyond simply location based solutions we have to examine a richer model of context.

I view context as a set of dimensions onto which solutions are mapped, with the ability to pull information from those dimensions that matter to their function. I have identified three core dimensions:

  1. Personal Context: This is perhaps best defined as “your stuff”; it is what represents the human, composed of an identity or persona and the digital artifacts, data and services that are associated with that persona.
  2. Physical Context: This is closely related to location but is more than that, since it includes the artifacts and services that are around you. These might be physical objects or unseen virtual elements rooted in the place you currently exist.
  3. Social Context: This is the current social group that is around you or that you are connected to. It is a real-time notion of community and could be made up of both friends and strangers.
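The three dimensions, plus the time they were sampled at, can be pictured as a single context snapshot that a pervasive application would query. A purely illustrative sketch in JavaScript; the field names are mine and not from any real API:

```javascript
// A hypothetical context snapshot combining the three dimensions,
// stamped with the moment it was captured, since physical and social
// context change constantly while personal context is relatively stable.
function captureContext(personal, physical, social) {
  return {
    personal: personal,   // identity plus the user's data and services
    physical: physical,   // location and the artifacts/services nearby
    social: social,       // the people currently around or connected
    time: Date.now()      // when this snapshot was taken
  };
}
```

A series of such snapshots over time is what would make the habit-detection and prediction discussed below possible.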

Providing windows onto these dimensions will allow a pervasive solution to draw the information necessary to create an experience which is relevant to the situation. There is another dimension, which is time. Time affects the physical and social context more than the personal, since the former are changing on a second by second basis. The constant changes in these dimensions may enable distinct patterns of activity to be determined and further exploited. The notion of digital diaries is not new, and neither is the fact that humans are creatures of habit; both of these can factor into supporting predictions of future situations, which might increase the performance and efficiency of a system reacting or preparing for a future state.



This may be an aspirational goal, and perhaps socially scary, but it would be the glue supporting a computing model that is more attentive to us, rather than the other way around.