PWAs are all about User Experience, or are they?

Chris Wilson offered an excellent introduction to Progressive Web Applications, «Progressive Web Apps are the new AJAX», at View Source Berlin. Chris showed a slide where, after applying PWA principles to a web site, conversions increased even on iOS (by a non-negligible 82%), a platform where neither Service Workers, Web Push nor the Application Manifest are supported.

During his talk, Chris put the emphasis on user experience, saying PWAs are all about UX, not technologies. This is precisely one of the most fascinating things PWAs are doing: changing the developer mindset to prioritize UX.

Some days before, Alex Russell had published an article titled “What, Exactly, Makes Something A Progressive Web App?”. However, the first half of the post is all about getting the “Add To Home Screen” dialog to be shown.

In his post, Alex highlights the importance of meeting some baseline criteria for your app to become an A+ progressive web app and to make Chrome show the “Add to Home Screen” dialog. As you probably know, these constraints require the use of certain technologies in a specific way: the developer must serve the page over HTTPS, use a Service Worker and link a manifest providing a minimal set of keys with specific values. All three are necessary (although not sufficient) for a web app to be reliable, to offer an offline experience and to be integrated with the mobile OS. Ultimately, all three characteristics contribute to a better user experience.
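As a rough sketch of those baseline criteria (the exact checks are Chrome's own and have evolved over time, so everything below is illustrative, not the real heuristic), the requirements boil down to something like this:

```javascript
// Illustrative sketch of the "Add to Home Screen" baseline: HTTPS, a
// Service Worker, and a manifest with a minimal set of keys and values.
// This is NOT Chrome's actual code; key names follow the Web App Manifest.

// A minimal Web App Manifest of the kind the article mentions.
const manifest = {
  name: 'My Progressive Web App',
  short_name: 'MyPWA',
  start_url: '/',
  display: 'standalone',
  icons: [{ src: '/icon-192.png', sizes: '192x192', type: 'image/png' }]
};

// Hypothetical check mirroring the three requirements described above.
function meetsBaseline({ https, hasServiceWorker, manifest }) {
  const manifestOk = Boolean(
    manifest &&
    manifest.name &&
    manifest.start_url &&
    ['standalone', 'fullscreen'].includes(manifest.display) &&
    Array.isArray(manifest.icons) && manifest.icons.length > 0
  );
  return https && hasServiceWorker && manifestOk;
}

console.log(meetsBaseline({ https: true, hasServiceWorker: true, manifest }));  // true
console.log(meetsBaseline({ https: false, hasServiceWorker: true, manifest })); // false
```

Note that all three inputs are about technology choices; none of them measures how the app actually performs for the user.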

Reliability aside, like it or not, offline experience and platform integration can be achieved with non-PWA technologies such as App Cache (ugly, but still a standard, and a cross-platform one at that) and OS-proprietary meta tags. Fortunately, high-performance best practices, techniques and patterns are vendor-independent.
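For readers who never used the pre-PWA stack, this is roughly what those older techniques look like (a minimal, non-exhaustive example):

```html
<!-- Offline support via the HTML5 Application Cache: a manifest file
     referenced from the root element (deprecated today, but a standard). -->
<html manifest="offline.appcache">

<!-- OS-proprietary platform integration, e.g. iOS home-screen meta tags: -->
<meta name="apple-mobile-web-app-capable" content="yes">
<link rel="apple-touch-icon" href="/icon-152.png">
```

The App Cache approach earned its reputation for being hard to get right, but it did ship across browsers, which is the point being made here.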

Although I can see the good intention behind the technology requirements –the dialog acts as a reward for developers minding UX–, in my opinion Chrome is going too far: it is not only guessing when users are deeply engaged with a web app, but also determining what specific kind of web apps users can engage with. Notice that the user-engagement heuristics only come into play if the technical requirements are met. User engagement is measured but performance is not; performance is simply expected as a result of using specific technology.
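The gating described above can be caricatured like this (a deliberately simplified sketch; the engagement thresholds are invented and Chrome's real heuristics are internal):

```javascript
// Caricature of the prompt-gating logic criticized in the text: engagement
// is only consulted once the technology checklist passes, and performance
// is never measured at all. Thresholds are made up for illustration.
function shouldShowInstallPrompt(site) {
  const meetsTechBar = site.https && site.serviceWorker && site.validManifest;
  if (!meetsTechBar) {
    return false; // engaged users of a "non-PWA" site never see the prompt
  }
  // Only now do user-engagement heuristics come into play.
  return site.visits >= 2 && site.minutesBetweenVisits >= 5;
}

// A fast, well-loved site without a Service Worker is filtered out
// before its engagement is even considered:
console.log(shouldShowInstallPrompt({
  https: true, serviceWorker: false, validManifest: true,
  visits: 40, minutesBetweenVisits: 60
})); // false
```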

Compare with app stores. We rant at Apple or Google for deciding what kind of content is appropriate and what is not, but at least, once developers observe the content policies and the app gets published, the install button is there for anyone who finds their application. Targeting the Web now means that in Chrome, even if developers figure out how to leverage user engagement, users will never see the install button unless the app meets the technical constraints.

Don’t misunderstand me. I really think that Home Screen launchers / shortcuts / bookmarks / whatever are a good idea, but if browsers are going to demand UX excellence from developers, they should do so by measuring performance during user interaction, not by tying development to a trendy technology stack. Regardless of impositions, the ultimate indicator of engagement will come from the user. Browsers should focus on leveraging UX in users’ interests, not on deciding what users should engage with.

I would love to see Mozilla, in its role as user advocate, taking a strong position on this matter by providing its own user-centric alternative.

Towards the Web of Trust

Recently, I exchanged some e-mails with Anne van Kesteren about security models for the Web. He wrote his thoughts down in an interesting post on his blog titled Web Computing. This is sort of a reply to that post with my own thoughts.

Today, 10,000 days later, the first published web page is still working (congratulations! ^^). 10,000 days ago, JavaScript did not even exist, but now we are about to welcome ES6 and the Web has more than 12,000 APIs.

The latecomer Service Workers, Push and Sync APIs have revitalized web sites to compete with their native counterparts, but if we want to leverage the true potential of the Web we should increase the number of powerful APIs until we stand at the same level as native platforms. Powerful APIs, however, imply security risks.

To encourage the proposal and implementation of these future APIs, an effective, non-exploitable revocation scheme is needed.

In the same way we can detect and block certain deceptive sites, we could extend this idea to code analysis. Relying on the same arguments that support the success of open source communities, perhaps a decentralized network of security reviews and reviewers is doable.

[Screenshot: a browser warning shown for deceptive sites]
I imagine a similar warning for web properties proven to be harmful.

This decentralized database (probably persisted by user agents) would declare the safeness of a web property based on the presumption of innocence. In roughly statistical terms, this means our hypothesis should be «the web property is evil», and we should then try to find evidence strong enough to reject the null hypothesis «the web property is not evil».
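To make the statistical framing concrete, here is a toy decision rule in that spirit (all numbers, names and thresholds are invented for illustration; it is not a proposal for the actual mechanism):

```javascript
// Toy sketch of the "presumption of innocence" rule. Null hypothesis H0:
// the web property is not evil, so any abuse reports are noise occurring
// at some small false-report rate. We flag the property only if the
// observed reports would be very unlikely under H0 (one-sided binomial test).

function choose(n, k) {
  // Binomial coefficient, computed iteratively to stay in float range.
  let c = 1;
  for (let i = 0; i < k; i++) c = (c * (n - i)) / (i + 1);
  return c;
}

function binomialTailProbability(n, k, p) {
  // P(X >= k) for X ~ Binomial(n, p)
  let total = 0;
  for (let i = k; i <= n; i++) {
    total += choose(n, i) * Math.pow(p, i) * Math.pow(1 - p, n - i);
  }
  return total;
}

function isHarmful(reviews, abuseReports, falseReportRate = 0.05, alpha = 0.01) {
  return binomialTailProbability(reviews, abuseReports, falseReportRate) < alpha;
}

console.log(isHarmful(100, 2));  // false: 2 reports in 100 reviews is plausible noise
console.log(isHarmful(100, 20)); // true: strong evidence against H0
```

The interesting property of this shape of rule is that a handful of reports never condemns a site; only a volume of evidence that is implausible under innocence does.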

Regardless of the methodology chosen, there will be two main categories: either we do not have enough evidence to declare a web site harmful (or we are completely sure it is safe), or we do have strong evidence that it should be considered harmful. We should take some action in the latter case and not alert the user in the former, but what should we actually do? Not sure yet, honestly.

For instance, in the case of strong evidence, should the user agent prevent the user from accessing the web site? Or should it automatically revoke some permissions and ask the user again? Could the user decide to ignore the warnings?

There are more gaps to fill, starting with a formal definition of harmful. That is, what does harmful mean in our context? Should a deceptive site be considered harmful from this proposal’s point of view? Consider a phishing site with a couple of input tags faking the Gmail login page, a simple POST form and no JavaScript at all… In my opinion, we should not focus on a site’s honesty but on API abuse. We already have mechanisms for warning about deception and, if you want to judge a web site’s reputation, consider alternatives such as My Web Of Trust.

In his response, Anne highlighted the importance of not falling into the CA trap, «because that means it’s again the browsers that delegate trust to third parties and hope those third parties are not evil».

OpenPGP has the concept of a ring (or web) of trust as a decentralized way to grant trustworthiness. What if, instead of granting trustworthiness, UAs provided a similar structure to revoke it? A kind of mistrust certificate.
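A mistrust certificate could look something like this (every field name here is a hypothetical shape I am inventing to make the idea tangible; real signatures via OpenPGP or Web Crypto are stubbed out):

```javascript
// Hypothetical "mistrust certificate": a signed assertion by a reviewer
// that a given origin abuses a specific API. Purely illustrative.
const mistrustCertificate = {
  origin: 'https://evil.example',
  api: 'geolocation',
  reason: 'Continuously tracks position with no user-visible purpose',
  reviewer: 'key:alice', // fingerprint of the reviewer's public key
  signature: '<detached signature over the fields above>'
};

// A user agent could honour a certificate only when its reviewer is
// reachable through the user's own ring of trusted reviewers.
function isTrustedRevocation(cert, trustedReviewers) {
  return trustedReviewers.has(cert.reviewer);
}

const myRing = new Set(['key:alice', 'key:bob']);
console.log(isTrustedRevocation(mistrustCertificate, myRing)); // true
```

The inversion is the interesting part: instead of a central authority vouching for sites, individually trusted reviewers revoke trust, and each user decides whose revocations to honour.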

And finally, there is the inevitable problem of auditing a web site. The browser could perform some static analysis and provide a per-spec first judgement, but what would happen after that? Can we provide some kind of platform to effectively allow decentralized reviews by users?

In my ideal world, I imagine a community of web experts providing evidence of API abuse: selecting a fragment of code and explaining why that snippet constitutes an instance of misuse or can be considered harmful, so that other users can see and validate the evidence. Providing this evidence would be a source of credit and recognition, to the same extent as contributing to open source projects.
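An evidence record in that world might be shaped like this (again, all field names and the acceptance rule are assumptions made up to illustrate the workflow, not a specification):

```javascript
// Sketch of the evidence record imagined above: a reviewer points at a
// code fragment and explains the abuse; other users validate the claim.
const evidence = {
  origin: 'https://evil.example',
  snippet: 'setInterval(() => navigator.geolocation.getCurrentPosition(send), 1000)',
  explanation: 'Polls the user position every second and exfiltrates it.',
  submittedBy: 'alice',
  validations: ['bob', 'carol'] // users who reviewed and confirmed the claim
};

// A naive acceptance rule: evidence counts once enough distinct users have
// validated it (whether and how to weight reviewers is an open question).
function isAccepted(evidence, quorum = 2) {
  return new Set(evidence.validations).size >= quorum;
}

console.log(isAccepted(evidence)); // true
```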

Another bunch of uncomfortable questions arises here, though. What if the code is obfuscated or simply minified? How does the browser track web site versions? Should the opinions of reviewers be weighted? By which criteria?

Well, that’s the kind of debate I want to start. Thoughts?


On Google’s plans for the Web

Now that I’m talking publicly about progressive web apps and the future of Web development and the Internet, I’ve been asked a lot for my opinion about Instant Apps and, before Instant Apps, about why Google was trying to close the gap between the Web and native platforms, i.e. Android.

Actually, I don’t really know, but although the strategy might seem contradictory, after Google I/O it seems obvious to me: Google is running an experiment on switching its distribution platform to bring more users to its search engine.

Why? Well, it makes sense that there is a correlation between the time a user spends on the Google search engine and the amount of money Google earns. So here is my assumption: given that the marketplace is free of ads and most of its content is free, browsing the marketplace is not profitable or, at least, not as profitable as the Google search engine.

But we like applications, and using applications implies certain behaviour patterns, the result of all these years of mobile education. So, in order to make Web applications more appealing (and thus increase the time users spend on the search engine), why not add native-application characteristics to Web applications?

On the other hand, there is Android. Another common question these days is: will we, Android developers, lose our jobs in the future? As if some Android developers felt threatened by progressive web apps in some way.

Well, honestly, I think not, at least in the near- and mid-term scenarios, and Google has provided some extra guarantees to make Android last even longer. With Instant Apps, Google is closing the gap from the other end: it is bringing to apps what we like most about the Web, immediacy (no market, no download, no installation) and URLs, so now we can search for apps with the Google engine. And that’s the point! No market, more time spent on the Google search engine.

And why bet on two approaches instead of one? Because it brings more users to the search engine and… they can afford it (in terms of costs). If both initiatives succeed, both developer communities will be happy and users will spend their time on the most profitable distribution platform they can use: the Internet. Everybody wins and the world is a wonderful place… for somebody. 😉

And that’s all, folks! Notice that I could be totally wrong, since all this is based on the premise that Google earns more money from the engine than from the market, but there is some data out there to support my assumption (google it!) and, in the end, this is only speculation.

EDIT: I changed the description of the experiment as it was not accurate enough. It is clearer now.