remap.js

I wrote this quick remap.js plug-in for require.js. Personally, I bundle my JS libraries with UMD, but I love require.js for unit testing, and with remap.js you can easily rewire your dependencies, getting true isolation.
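To give a feel for the kind of rewiring I mean, here is a sketch using RequireJS's built-in `map` configuration rather than remap.js's own API (all module ids are made up for the example):

// In the test runner configuration: every module asking for 'app/storage'
// receives the stub instead, so the unit under test is truly isolated.
require.config({
  map: {
    '*': {
      'app/storage': 'test/stubs/storage'
    }
  }
});

require(['test/storage-consumer.spec'], function () {
  // the spec now exercises its subject against the stubbed dependency
});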

This is an early release and it may still have bugs, but you’ll get the idea. Enjoy!


Modern Mobile Web Development II: The Role of Workers and Offline Cache

This is the second post of a series of articles about how the several technologies that make up the New Gaia Architecture (NGA) fit together to speed up the Web.

In the first chapter we focused on performance and resource efficiency, and we realised the potential conflict between the multi-page approach to web applications, where each view is isolated in its own (non-iframe) document, and the need to keep in memory all the essential logic & data to avoid unnecessary delays and meet the time constraints for an optimal user experience.

In this chapter I will explore web workers in their several flavours: dedicated workers, shared workers and service workers, and how they can be combined to beat some of these delays (see the sketch after the list). As a reminder, here is the breakdown from the previous episode:

  1. Navigate to a new document.
  2. Download resources (which includes the template).
  3. Set your environment up (include loading shared libraries).
  4. Query your API for the model.
  5. Combine the template with your model.
  6. Render the content.
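For instance, a service worker could pre-cache the template and the shared libraries, taking steps 2 and 3 off the critical path on repeat visits. This is only an illustrative sketch; the cache name and asset paths are made up:

// sw.js: illustrative only; 'nga-static-v1' and the asset list are invented.
var STATIC_CACHE = 'nga-static-v1';
var STATIC_ASSETS = [
  '/shared/js/libs.js',         // shared libraries (step 3)
  '/views/list/template.html'   // the view template (step 2)
];

self.addEventListener('install', function (event) {
  // Populate the cache before the worker takes control.
  event.waitUntil(
    caches.open(STATIC_CACHE).then(function (cache) {
      return cache.addAll(STATIC_ASSETS);
    })
  );
});

self.addEventListener('fetch', function (event) {
  // Serve pre-cached assets instantly; go to the network for the rest.
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});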

Continue reading “Modern Mobile Web Development II: The Role of Workers and Offline Cache”

Modern Mobile Web Development: Breaking the rules, beating delays, improving responsiveness and performance

I spent most of this year working on Service Workers for the New Gaia Architecture (NGA) that Mozilla is preparing to release with Firefox OS 2.5 & 3.0. This is the first of a series of articles about the effort we are putting into evolving Gaia while contributing to best programming practices for the Modern Mobile Web.

More than an architecture, NGA is a set of recommendations to reach specific goals, good not only for Firefox OS applications but for any modern web application: offline availability, resource efficiency, performance and continuity. Different technologies exist (and others are coming) to help reach each target.

However, there is some confusion about how all these technologies fit together. Even inside the Firefox OS team, some see pinning apps (a mechanism to keep an entire site cached forever, not to be confused with the concept of pinning the web) as overlapping with Offline Cache. Other people see the render store as unnecessary and overlapping with the prerender technology. If this is your case, or you simply did not know about these concepts, continue reading: you deserve an explanation first.

Continue reading “Modern Mobile Web Development: Breaking the rules, beating delays, improving responsiveness and performance”

When to use ES6 proxies?

One of the coolest features of ES6 is proxies. Proxies are a special kind of object that alters the semantics of other objects: property access, deletion, assignment, call semantics and so on.

As a meta-programming lover, I’m eager to use proxies for some fancy tricks. The other day I had the chance to use proxies and property getters while implementing an EventEmitter trait. I want to share the implementation to illustrate the differences between intercepting `get` semantics and using a `getter`.

Consider this:

function newEmitter() {
  // Private key for the handler store (Symbol is not a constructor,
  // so it must be called without `new`).
  var __handlers = Symbol('handlers');
  return {
    on(type, handler) {
      this[__handlers][type].add(handler);
    },
    off(type, handler) {
      this[__handlers][type].delete(handler);
    },
    emit(type, ...args) {
      this[__handlers][type].forEach(handler => handler.call(this, ...args));
    },
    // Lazily creates the handler set on first access, then replaces
    // itself with a plain data property on the instance.
    get [__handlers]() {
      var handlerSet = newHandlerSet();
      Object.defineProperty(this, __handlers, { value: handlerSet });
      return handlerSet;
    }
  };
}
function newHandlerSet() {
  // Every property of this object defaults to a (lazily created) Set.
  return new Proxy(Object.create(null), {
    get(target, property) {
      if (!target[property]) { target[property] = new Set(); }
      return target[property];
    }
  });
}

Yep, the getter is gratuitous: it could simply be a computed property holding the proxy, but let’s do it this way for learning purposes.

The `newEmitter()` function creates an object ready to be mixed into an object prototype to use it as an event emitter. The `on()`, `off()` and `emit()` methods are trivially simple thanks to a couple of assumptions:

  1. The `__handlers` property will always be valid.
  2. Entries of the `__handlers` property are always sets.

This allows us to avoid checks on these properties and keep the code simple.
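As a quick usage sketch (the `player` object and the `logScore` handler are made up for the example), mixing the emitter into a prototype could look like this:

var logScore = function (points) {
  console.log(this.name, 'scored', points);
};

// Use the emitter as the prototype of our own object.
var player = Object.create(newEmitter());
player.name = 'player-1';

player.on('score', logScore);
player.emit('score', 42);      // "player-1 scored 42"
player.off('score', logScore);
player.emit('score', 7);       // nothing: the handler was removed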

Notice that both mechanisms, the getter and the get trap in the proxy, solve the same problem: providing a default value for some properties. The difference is the nature of the knowledge we have about these properties. In the case of the getter, we want `__handlers` to default to a handler set, so the getter overrides itself with an empty handler set. We know specifically which property we want to access in a special way, and we implement the behaviour locally.

In the case of the get trap, we are altering not a specific property of an object but all of its future properties. Actually, we are defining a new kind of object where all properties must be sets, so we are changing the whole semantics of the object: as soon as we access a property, we obtain a set; if the property does not exist yet, we obtain an empty set. We implement this behaviour globally, for all present and future properties.

You use getters to compute properties, while you use get traps to alter semantics. Do you know how to calculate a specific attribute? Use a getter for that attribute. Do you want to change what accessing any property means, in general? Use the get trap.
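A minimal side-by-side illustration of that rule of thumb (the `person` and `counters` objects are hypothetical):

// A getter: we know exactly which property to compute.
var person = {
  first: 'Ada',
  last: 'Lovelace',
  get fullName() { return this.first + ' ' + this.last; }
};

// A get trap: we change what accessing *any* property means.
var counters = new Proxy(Object.create(null), {
  get(target, property) {
    return property in target ? target[property] : 0;
  }
});

counters.clicks += 1;  // reads the default 0, then stores 1
console.log(person.fullName, counters.clicks, counters.whatever); // "Ada Lovelace" 1 0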

Oh! And please, do not use proxies for controlling access or for extending or restricting APIs; you already have inheritance and composition for that!

Proxies are really powerful. I hope to show you more uses soon.

Static vs. Dynamic Typing

The other day I was invited to participate in the Nación Lumpen podcast about the hot topic of Dynamic Typing vs. Static Typing. Luis Osa and I were there to argue in favor of dynamic typing, from the Python and JavaScript perspectives respectively.

My first impression after the podcast was that we had not been able to defend dynamic languages properly, and I ended up very surprised by the modern features of statically typed languages.

Lots of things were said during the conversation and, in my opinion, lots of reasons arose to make statically typed languages shine over dynamic ones. After 700 km of highway, I’ve managed to sort out my ideas and conclusions about the podcast. This is a long post, so be prepared!

Types are meaning

From the bare metal perspective, types are nothing. The pure hardware executing the programs in our devices understands only memory addresses and data sizes. It does not perform any type checking before running the assembly code; once the code is loaded, it’s on its own.

You introduce types to mean something. Consider simple types in C: they are all about data sizes (int, word, byte), formats (float, double, pointer) and access (const), but with them you are helping the compiler create better target code: memory efficient, faster and safer. In addition, C gives us ways to combine simpler types into complex ones by using DEFINE macros and structured types. C is able to calculate the size, format and access type of each new abstraction, preventing us from accessing invalid fields of a record or using records in incorrect places. Moreover, C makes new names carry unique meaning (i.e. two structures differing only in the name of the structure are actually different types), so we can abuse this feature to create abstract relationships.

This is what I think of when talking about types. Types are (or at least add) meaning. You use types as a way to classify data, to label some properties that a set of values should have and to establish relationships with other types.

Continue reading “Static vs. Dynamic Typing”

Offliner: switch the Web offline

Offliner is a proof of concept for bringing web applications offline through service workers.

Service workers allow the developer to intercept resource requests to the network and take control of the fetching process. Combined with caches, service workers enable the Web to succeed where AppCache failed.

Offliner is a project aimed at providing a service worker that is as transparent and automatic as possible while keeping a lot of customization power. It always tries to fetch from the network and falls back to the cache when remote resources are unreachable for any reason. You can follow Offliner on GitHub.
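To make that network-first behaviour concrete, here is a minimal sketch of the strategy (this is not Offliner’s actual code; the cache name is made up):

var CACHE = 'offliner-sketch-v1';

self.addEventListener('fetch', function (event) {
  event.respondWith(
    fetch(event.request)
      .then(function (response) {
        // Keep a copy of successful responses for later offline use.
        var copy = response.clone();
        caches.open(CACHE).then(function (cache) {
          cache.put(event.request, copy);
        });
        return response;
      })
      .catch(function () {
        // Network unreachable: fall back to whatever was cached before.
        return caches.match(event.request);
      })
  );
});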