Most mobile applications I have worked on over the last few years have communicated with a server. I have used lots of server-side implementations and, even though I mainly write for the phone, I have created a few server implementations myself.
The main decision at the start of a project is one of scalability. There tend to be three main strategies…
- Quick and easy. Companies taking this route implement using tools and technologies that are very easy to use or that they are already familiar with. This produces a quick time to market, but the solution is rarely scalable. Once the product is successful, the server side (and sometimes the phone's server-communication code) tends to get re-implemented.
The advantage of a quick time to market is that the concept can be proven and shown to potential investors. The re-implementation is usually a very painful process because there are conflicting concerns, such as improving performance and adding new features while supporting existing users.
Many companies (and individuals) follow the ‘quick and easy’ route out of ignorance rather than as a conscious trade-off against scalability.
- Design for the medium term. This involves common web and database technologies but using them such that they can be scaled. This usually means following standard industry practice of physically separating the data, business logic and web site so each can be improved as required. Even within these sub-sections it’s possible to identify critical performance areas and design/implement these so that they can be improved upon.
All this costs time and money and may not actually be fully utilised if your idea isn’t successful. Conversely, if you have a viral service that will quickly reach hundreds of thousands of users who are frequently writing data, then this solution won’t scale easily.
- Design for the long term. This involves cutting-edge technologies or custom solutions, including cloud services such as Google’s BigTable and Amazon’s SimpleDB. These, in particular, can impose restrictive design practices on your application, for example dealing with non-synchronous data updates (data may not get written immediately) and much simplified data indexing. They can also get expensive when your application becomes popular.
As a very simple example, I recently estimated how much it would cost to use cloud services to host my blog. The bandwidth costs would far exceed what I pay for a dedicated server. But then you could argue that if your service is successful then you probably have the ability to pay.
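To make the non-synchronous updates point concrete: with an eventually consistent store, a write may not be visible to an immediately following read, so client code often has to poll or tolerate stale data. The toy class and helper below are purely illustrative (they don't correspond to any real cloud API); they just simulate a write that only becomes visible after a delay.

```python
import time

class EventuallyConsistentStore:
    """Toy store: a write only becomes visible after a delay,
    mimicking the non-synchronous updates of services like SimpleDB."""
    def __init__(self, delay=0.05):
        self._pending = {}   # key -> (value, time at which it becomes visible)
        self._visible = {}
        self._delay = delay

    def put(self, key, value):
        self._pending[key] = (value, time.time() + self._delay)

    def get(self, key):
        pending = self._pending.get(key)
        if pending and time.time() >= pending[1]:
            # Propagation "completed": promote the write to visible.
            self._visible[key] = pending[0]
            del self._pending[key]
        return self._visible.get(key)

def get_with_retry(store, key, attempts=10, wait=0.02):
    """One common client-side workaround: poll until the write shows up."""
    for _ in range(attempts):
        value = store.get(key)
        if value is not None:
            return value
        time.sleep(wait)
    return None

store = EventuallyConsistentStore()
store.put("user:1", "alice")
print(store.get("user:1"))              # None - the write hasn't propagated yet
print(get_with_retry(store, "user:1"))  # "alice" once it has
```

The retry loop is the crude version; real applications usually restructure the UI or data model so that stale reads don't matter, which is exactly the kind of restrictive design practice mentioned above.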
If you look at the really successful services, they tend to use more customised approaches. Many store all information in memory and only write updates to a conventional relational database. The database is really a backup in case the in-memory store goes down. This works well for services that are primarily read from rather than written to. The irony is that most relational databases already work this way, i.e. they keep frequently accessed information in memory. My question to database vendors is: why isn’t this as efficient as a home-grown caching solution? Too much security checking, housekeeping and other feature bloat (the kind that wins DBMS vendors users and sales) is probably the answer.
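A rough sketch of that memory-first pattern, with the relational database demoted to a durable backup (the class name and schema here are mine, for illustration only; sqlite3 stands in for whatever RDBMS you'd actually use):

```python
import sqlite3

class MemoryFirstStore:
    """Serve every read from an in-memory dict; the relational database
    is written to only for durability, and is replayed to warm the
    cache on startup. Illustrative sketch, not production code."""
    def __init__(self, db_path=":memory:"):
        self._db = sqlite3.connect(db_path)
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
        # Recover state from the backup: this is the only time we read the DB.
        self._cache = dict(self._db.execute("SELECT k, v FROM kv"))

    def get(self, key):
        # Reads never touch the database.
        return self._cache.get(key)

    def put(self, key, value):
        self._cache[key] = value
        # Write-through to the database purely so state survives a restart.
        self._db.execute(
            "INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?)", (key, value))
        self._db.commit()

store = MemoryFirstStore()
store.put("greeting", "hello")
print(store.get("greeting"))  # served from the dict, not the database
```

In a read-heavy service the dict absorbs almost all traffic; the database only pays the cost of writes, which is exactly the trade-off described above.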