Towards publicly-hosted applications

It is often said that blockchain is a solution in search of a problem. Chromapolis is a response to problems that we have observed in the real world. This article intends to describe those problems and how we think it is possible to solve them. It is meant for a wide audience — blockchain skeptics and enthusiasts, programmers and end-users alike.

Problems with centralized applications

Aside from a few peer-to-peer applications, all online services that you interact with today are “centralized”. That is, they are hosted, run, and maintained by a single entity. In the most typical scenario, this entity is a for-profit company which desires to maximize its profits. This can be at odds with the desires and needs of its users and can be compared to the principal–agent problem: a service provider serves its users, but prioritizes its own interests even if they are in conflict with the interests of those users.

A for-profit company needs to address the needs of users to some extent — without users, there are no profits. But such a company is motivated to provide as little as is necessary to efficiently monetize its user base. Users likely don’t get the best service possible. The problems with this can be summarized as follows:

Lack of choice and features

A service owner does not need to cater to all of its users. In the case of services with network effects (e.g. social media sites), users are often forced to accept things they find undesirable just to be able to access the network. For example, many people would rather pay for the service Facebook provides in order to avoid ads. Facebook, however, targets the majority of users who accept the ad-supported model, so it has little motivation to offer a paid alternative to a minority; those who would prefer to pay must either accept the ads or be excluded from the network.

Information asymmetry


Many services rely on the fact that users do not fully understand how the business operates. Services tend to provide as little information as the law requires, and on top of that, users often lack the time to read the terms in full and analyze all possible consequences. For instance, most online services have suffered a security breach at some point (Sony, Google, Facebook, Equifax, etc.), and the computer security experts they employ were likely aware that this was a risk, if not an inevitability. The average user expects that private data sent to a service will remain private; they do not expect that it is likely to be leaked.

Privacy violations

This is largely connected to information asymmetry. It is very common for online services to violate user privacy in a way which is detrimental to users. Users are generally not aware of how their data is used, and they have very little recourse in any case.

Censorship

As mentioned above, service providers are motivated to cater to the majority of their users. This means that when a user is suspected of some kind of abuse, there is an incentive to err on the side of caution and ban them on suspicion alone. As long as the majority does not object, it is acceptable to unjustly exclude a minority of users. Services like YouTube and Facebook use bots to determine whether certain content violates the terms of service, which inevitably produces many false positives. While the use of a simple algorithm might be justified for initial screening, these services are sometimes overzealous, and users have no meaningful way to dispute the decision. Human moderation is costly and adds little to revenues.

Reduced availability

Modern services have achieved very high levels of reliability, but users still don’t get the best deal possible. Service providers are motivated to spend as little as possible: they likely understand how much downtime their users will tolerate, and that tolerance becomes the target service level. Further, they centralize all data processing because they need to guard proprietary data and algorithms. Availability may well be inferior to that of a more distributed architecture with a higher level of replication.

Lack of interoperability

For-profit companies might seek to build monopolies, as monopolies are more profitable. Interoperability with rival systems rarely helps in building a monopoly, so dominant companies deliberately avoid it. The best example is probably the evolution of messaging software. Early protocols such as ICQ and MSN deliberately introduced breaking changes to disable alternative clients, which offered users the convenience of connecting to multiple networks within one user interface. The XMPP protocol provided a standard for interoperability between compliant providers, including the now-defunct Google Talk. After companies realized that their user bases were their most valuable resource, open standards were largely abandoned because they made it easier for users to migrate between platforms. It’s hard to find a big-brand messenger which still supports XMPP, even though the protocol is adequate for all textual messaging needs. It’s worth noting that even while Google Talk was still supported, Google made it hard to connect to ordinary, non-branded XMPP servers.

Service discontinuation


It is very common for a service that could potentially be sustainable to be shut down because it doesn’t generate enough profit. This might seem paradoxical, but VC-backed companies are typically expected to generate high returns for investors, say, in the $10-100M range. If a service is sustainable but does not generate significant profits, VCs will push it towards acquisition (e.g. an ‘acquihire’), after which the service is typically shut down. This also often happens when a large, mature company decides that a service it provides is not ‘big enough’ to be interesting. One of the prominent examples is Google Reader, a web feed aggregator provided by Google. It had a lot of users who found it very convenient, but it was shut down when Google decided to focus on its core business. When a service is discontinued, users have to go through the hassle of finding an alternative (if one exists), moving their data, and so on.

Given all of these problems, it is worth exploring alternatives to centralized online services. Of course, we should not expect that some clever hack will make Google and Facebook irrelevant. The point is to find an alternative which can compete with centralized services at least in some niches.

Existing alternatives

Services hosted by non-profits and governments

Non-profits can be better aligned with their user base as they do not prioritize profits. Some successful examples exist, e.g. Wikipedia. The problem is that ultimately the user base is at the mercy of the body which governs the non-profit. If governance is benevolent, everything is fine. If it isn’t, the service might be ruined. End users have little to no opportunity to influence this outcome, and no guarantee that the service they signed up for will remain consistent in the future. A non-profit can also have significant overhead costs and no obvious way to support itself in the long term. The result is that there are not many online services run by non-profits.

Crowdfunding

One might think that if the public finances the creation of a service, it would have control over its operations, but this doesn’t seem to be the case, as is evident from multiple Kickstarter campaigns which ended with all the usual for-profit behavior. See, for example, the Oculus Rift.

Open-source software


Open-source software has addressed some of the problems associated with proprietary software — vendor lock-in, rent-seeking behavior, lack of interoperability — but this only works for software which end users install on their own computers and which does not require a shared online service. If an online service is required, somebody needs to host it, and we are back to a single entity having a large influence on many people. Thus open source can be part of the solution, but by itself it is not sufficient.

Peer to peer software

Peer to peer (P2P) software aims to create an online service through the joint effort of end-user computers. P2P software became a big success in file sharing and content delivery, as this is a kind of workload which is great for pure P2P: downloads can be trivially parallelized because any computer can download from any other; there is no great burden on peers, as a peer can store as little as a single file fragment; and the system is robust against malicious activity, since in the worst case a corrupt file fragment is served, which can be trivially detected. Most other applications can’t be served in a pure P2P fashion, as there are concerns about the consistency of dynamically updated data, potential attacks, and skewed incentives.
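
To see why a corrupt fragment can be trivially detected, here is a minimal TypeScript sketch of hash-based fragment verification, roughly the approach protocols like BitTorrent take; the FileMetadata shape and acceptFragment function are invented for the example and are not any real protocol’s API.

```typescript
import { createHash } from "node:crypto";

// Hypothetical metadata the downloader obtains out of band (for BitTorrent,
// this would come from the torrent file): one expected digest per fragment.
interface FileMetadata {
  fragmentHashes: string[]; // hex-encoded SHA-256 digests
}

// Check a fragment received from an arbitrary, untrusted peer.
// If the digest does not match, the fragment is simply discarded and
// re-requested from another peer; no peer has to be trusted.
function acceptFragment(meta: FileMetadata, index: number, data: Buffer): boolean {
  const digest = createHash("sha256").update(data).digest("hex");
  return digest === meta.fragmentHashes[index];
}
```

Because the check is purely local and cheap, a malicious peer can waste a little bandwidth at worst, which is what makes this workload so forgiving for pure P2P.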

Blockchain

Bitcoin demonstrated that a service as complex and important as a payment network can function as a peer to peer application. Strictly speaking, Bitcoin isn’t pure P2P: while Satoshi Nakamoto introduced it as such, in practice we have several different classes of participants:

  • End-users using a wallet of some kind — they can send payments without operating as part of the network
  • Full nodes — most closely resembling ‘peers’, as they are all equal
  • Miners/mining pools — special ‘peers’ which coordinate the network, typically using vast amounts of specialized hardware

Thus we see a much more complex structure and more specialization than in pure P2P networks such as Gnutella. Nevertheless, Bitcoin shares certain similarities with older forms of P2P software, e.g. open participation and a decentralized mode of operation.

Bitcoin extended the P2P approach with its use of cryptography, consensus, and replication. This allows Bitcoin to thwart various kinds of attacks and maintain consistency, which is crucial for a ‘payment network’.
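
As a toy illustration of how that chaining supports consistency, consider the sketch below, which assumes a deliberately simplified block structure (real Bitcoin headers also carry a timestamp, difficulty target, nonce, and a Merkle root of transactions): each block commits to the hash of its predecessor, so any node can check a received chain entirely on its own.

```typescript
import { createHash } from "node:crypto";

// A toy block, not Bitcoin's actual header format.
interface Block {
  prevHash: string; // hex hash of the previous block
  payload: string;  // stand-in for transaction data
}

function hashBlock(b: Block): string {
  return createHash("sha256").update(b.prevHash + b.payload).digest("hex");
}

// Any participant can replay this check locally; no trusted server is
// needed to decide whether a chain received from peers is internally
// consistent or has been tampered with.
function chainIsConsistent(chain: Block[]): boolean {
  for (let i = 1; i < chain.length; i++) {
    if (chain[i].prevHash !== hashBlock(chain[i - 1])) return false;
  }
  return true;
}
```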

Right after Bitcoin was announced, people started to ask questions: Can this approach be used to make other kinds of P2P/decentralized applications?

People quickly figured out that it can, but the trade-offs in Bitcoin’s design make it directly relevant only to a relatively narrow range of applications.

Bitcoin’s design can be described as paranoid: it replicates data on a massive scale, makes no use of parallelization (e.g. no sharding), and makes write operations scarce and expensive. This kind of design is warranted when we deal with something like money (a leading theory says that Bitcoin is not so much a payment network as it is ‘sound money’), but for little else.
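
To give a sense of how scarce those writes are, here is a rough throughput calculation using approximate, pre-SegWit Bitcoin figures; the exact numbers vary, but the order of magnitude is what matters.

```typescript
// Approximate, pre-SegWit figures (illustrative only):
const blockSizeBytes = 1_000_000;   // ~1 MB block size limit
const avgTxSizeBytes = 250;         // typical transaction size
const blockIntervalSeconds = 600;   // one block roughly every 10 minutes

const txPerBlock = blockSizeBytes / avgTxSizeBytes;    // ~4,000 transactions
const txPerSecond = txPerBlock / blockIntervalSeconds; // ~6-7 tx/s for the whole network

// Every full node stores and verifies every one of those transactions,
// which is why writes are scarce and expensive by design.
console.log({ txPerBlock, txPerSecond });
```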

The first service aside from money implemented using a so-called blockchain (which is what people call the Bitcoin-style P2P approach) was a decentralized name system: Namecoin. It was successful as a concept, but not in terms of popularity (perhaps because Namecoin itself is rather poorly engineered; it was created as a minimal modification of Bitcoin rather than something specifically designed to serve as a decentralized name system).

But what else can be built using the blockchain approach? As we can see with Ethereum, which generalizes the Bitcoin-style approach into a universal computation platform, the most popular applications are services related to money and tokens (which, in terms of technical features, behave like money). Other applications suffer from a blockchain’s resource limits, which are themselves defined by the ‘paranoid’ nature of the design.

Publicly hosted application example

Although we have established that there are no significant technical impediments to creating true publicly hosted applications, it is still not clear what exactly could be run in such a way, and whether it is actually feasible. Let’s consider a concrete example to make things clearer. Twitter has become an essential messaging platform for people all around the world: important news is announced on Twitter, and it is used by politicians, executives, and large companies. It is becoming as important as email, and yet it is controlled by a single US company which can decide which posts to display, what to censor, what to prioritize, and who to ban. It is also hostile towards alternative clients.

Suppose a certain group of Twitter users cares about this enough that they would rather pay for a decentralized version of Twitter, let’s call it Dwidder. How could they achieve this? To analyze feasibility, we first need to define some metrics. As we are interested in whether publicly hosted apps can work at scale, let’s suppose the aforementioned group consists of a million people. Moreover, we need some kind of budget estimate, so let’s assume each user is willing to pay 1 US dollar per year for this service — less than the cost of a cup of coffee in many countries.

The group raises one million dollars between them. $500k goes into development. Experienced programmers can be hired for less than $10k per month, so $500k pays for 50 man-months, or roughly 4 people working for a whole year. One might say: “But Twitter has thousands of programmers!” Yes, but those programmers mostly work on things like serving ads, analytics for advertisers, and so on. Dwidder doesn’t need that. If we had a platform which made publicly hosted app programming about as easy as normal programming, and which could scale, 50 man-months would be enough to build something rather full-featured and impressive.

Now for the hosting side. Hardware nowadays is so powerful that a single server (which might cost about $100 per month) can easily serve the needs of a million users, assuming their requests are spread over time. With a hosting budget of $500k per year, the application can pay 10 providers roughly $4,000 per month each, which gives providers an ample profit margin.
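
To make the arithmetic above explicit, here is the whole back-of-the-envelope estimate as a short script; every figure is an illustrative assumption carried over from the text, not measured data.

```typescript
// Illustrative assumptions from the text -- not real market data.
const users = 1_000_000;
const feePerUserPerYear = 1;                          // USD
const annualBudget = users * feePerUserPerYear;       // $1,000,000

// Development: half the budget, at an all-in developer cost of $10k/month.
const devBudget = annualBudget / 2;                   // $500,000
const devCostPerMonth = 10_000;
const manMonths = devBudget / devCostPerMonth;        // 50 man-months (~4 devs for a year)

// Hosting: the other half, split across 10 providers.
const hostingBudget = annualBudget / 2;               // $500,000 per year
const providers = 10;
const perProviderPerMonth = hostingBudget / providers / 12; // ~$4,167/month
const serverCostPerMonth = 100;                       // rough cost of a capable server
const monthlyMargin = perProviderPerMonth - serverCostPerMonth;

console.log({ manMonths, perProviderPerMonth, monthlyMargin });
```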

“But wait, didn’t Twitter initially have major scaling issues which took years to resolve? This can’t be so simple, can it?” Well, again, Twitter was solving a very different problem: it had to compute the list of tweets to show on a page on the server side. The server needed to reply to a request within 50-100 ms, which made it infeasible to run a query for each subscription, so Twitter had to perform multiple writes for each tweet in order to serve pages quickly. Dwidder does not need to “serve pages quickly”; in fact, it doesn’t have to serve any pages at all. The list of messages displayed to a user can be computed on the client side and updated gradually over time. In other words, the modern web stack allows much of the processing to be off-loaded from the backend to the front-end.
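
As a minimal sketch of what computing the list on the client side could look like, assume a hypothetical per-author fetch call (the Message shape and FetchMessages signature are invented for illustration and are not part of any real Dwidder API): the client pulls recent messages from each followed account and merges them locally, so no backend ever assembles a timeline page.

```typescript
// Hypothetical message shape -- Dwidder's real data model would be
// defined by the application, not by this sketch.
interface Message {
  author: string;
  postedAt: number; // Unix timestamp in milliseconds
  text: string;
}

// The client supplies a function that fetches an author's recent messages
// from whichever provider it is currently connected to.
type FetchMessages = (author: string, since: number) => Promise<Message[]>;

// Build the timeline entirely on the client: one cheap read per followed
// account, merged and sorted locally. There is no server-side fan-out, and
// the result can be refreshed lazily rather than within a strict
// 50-100 ms page-rendering budget.
async function buildTimeline(
  following: string[],
  since: number,
  fetchMessages: FetchMessages
): Promise<Message[]> {
  const perAuthor = await Promise.all(
    following.map((author) => fetchMessages(author, since))
  );
  return perAuthor.flat().sort((a, b) => b.postedAt - a.postedAt);
}
```

Because the merge happens on the user’s device, different clients are free to rank or filter the timeline however their users prefer, which is exactly the kind of customizability a single provider has little incentive to offer.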

Conclusion

Chromapolis posits a vision of reformed application hosting for the future of the internet, one which supports new kinds of community-backed applications that are truly aligned with the needs of their users. A more equitable business relationship between dapp entrepreneurs and users can lead to a healthier application ecosystem with accountability and transparency at its heart. Unlike open-source or non-profit initiatives, this does not mean that there is no money to be made. Rather, Chromapolis uses technology to address bottlenecks in the current architecture of the internet which have enriched the few at the expense of the many. Without the ability to seize and control these bottlenecks, those who wish to profit must innovate and compete to design applications which deliver value to their users without exploiting them. This is our vision for publicly hosted applications on Chromapolis. Can this decentralized architecture actually improve privacy, reduce unreasonable censorship, and increase customizability? While we believe the answer is “yes”, there is much research yet to be done. We intend to explore these questions in more detail in future articles.

Hayden P.

A blockchain and cryptocurrency enthusiast
