Interning @ PayPal: Checkout A/B Testing, Developing Features, and Cracking Bugs

My internship at PayPal was a great experience. I was given real work that mattered. From day one, I had the opportunity to continuously write, commit, and push production-level code that impacted the millions of people who use PayPal Checkout.

As a Software Engineering Intern on PayPal’s Checkout Guest and Signup team, I focused on building and iterating on A/B tests to improve customers’ experiences and onboard new users. Within a few weeks of joining, I developed a solid enough understanding of our frontend and backend codebases to fix several critical production bugs: everything from updating password tooltip feedback to risk validation fixes. I was able to simultaneously work with design and product on implementing new features, fixing bugs, and iterating on A/B tests, all of which have been pushed live.

Example: A Simple Date of Birth Tooltip UI Upgrade

[Screenshot: date of birth tooltip]

With one of our rockstar engineers, Ming Jin, I worked on an A/B test feature: Web Payments Standard Interstitial, which is a specific checkout experience to convert a guest user to a signed up user. I was able to spend a lot of time on this test, repeatedly pushing code as we kept ramping variant frequency.

[Screenshot: Web Payments Standard interstitial]

In my last few weeks, among all the intern event madness and winning 2nd place at an internal dashboard hackathon, I didn’t know what else I would tackle.

After those event-packed weeks, I started working on another A/B test that incentivized user signup, going a level deeper than our regular A/B tests by running text variants on the tests themselves.

Working at a large company like PayPal, I realized that I had the chance to make a significant impact on onboarding users through these A/B tests, even if the impact seemed minuscule at first. Ramping our tests up and down, I began to see the impact of my work as conversion rates climbed after a successful release cycle.

I was fortunate enough to be included in the team’s work to the extent that I was able to work on the other side of code reviews: the side where I review other people’s code! This absolutely blew my mind! I never thought I would be reviewing other engineers’ code as an intern. My opinion was valued, my knowledge of the codebase was trusted, and my work ethic was recognized as one that championed the best possible product.

I feel really proud of what I accomplished this summer, especially after being recognized for my high performance. The impact of my work didn’t hit me until I bought a gift on Etsy and the beautiful experiences I helped curate showed up on my screen.

Shout out to my main mentors Vikram Somu, Karthik Chandrakanth, and Ming Jin for taking the time to guide me on all the things I did. Also thanks to Shruti Jain, Viswa Nachiappan, and Upendra Pigilam for being great team members and mentors. Finally, thanks to my manager, Stephen Westhafer, for ensuring I had the resources and guidance I needed to be successful! As a team, we had great moments: everything from office email pranks to pushing a successful release.

In addition, thanks to Mark Stuart and Daniel Brain for helping me on a variety of things from contributing open source to pushing this blog post.

You’ll see my work the next time you checkout with PayPal!

From Require.js to Webpack – Part 2 (The How)

This is the follow-up to a post I wrote recently called From Require.js to Webpack – Part 1 (the why), which was published on my personal blog.

In that post I talked about the 3 main reasons my team decided to move from require.js to webpack:

  1. Common JS support
  2. NPM support
  3. A healthy loader/plugin ecosystem

Despite the clear benefits in developer experience (DX), the setup was fairly difficult, and I’d like to cover some of the challenges we faced to make the transition a bit easier.

From paths to alias to NPM

The first thing you do when converting from require.js to webpack is take your whole require.js configuration file and turn it into a webpack.config.js file.

In practice for us this meant addressing three areas:

  1. Tell webpack where you keep your JS files
  2. The enormous paths list in our require.js config
  3. All of the shim config

Module Path Resolution

The first thing you need to do is tell webpack where your JS files are. Require.js could usually infer this based on the <script> tag you used to set it up, or you might have configured it using the baseUrl option. This is super easy to set up in webpack by adding the following to your config:

{
    resolve: {
        modulesDirectories: ['public/js']
    }
}

If you forget to set this, webpack will look for your files in the node_modules directory or in a directory called web_modules.

Migrating Require.js paths to webpack aliases

Initially the conversion process is really straight forward.

Start with a require.js config like this:

requirejs.config({
    paths: {
        "backbone": "lib/backbone-1.1.0",
        "jquery": "lib/jquery-1.10.2",
        "underscore": "lib/lodash.underscore-2.3.0",
        "jqueryUI": "lib/jquery-ui.min"
    }
});

And it translates into the following webpack configuration:

module.exports = {
    resolve: {
        alias: {
                "backbone": "lib/backbone-1.1.0",
                "jquery": "lib/jquery-1.10.2",
                "underscore": "lib/lodash.underscore-2.3.0",
                "jqueryUI": "lib/jquery-ui.min"
        }           
    }
}

A Loader for Every Shim

The next thing, which took some special care, was fixing up our shim config. This was slightly harder than it looked, because it’s easy to forget what the shims are actually doing. Let’s revisit that for a moment.

A require.js shim takes modules that are not AMD-compatible and makes them AMD-compatible by wrapping them in a little bit of code which will pull in the appropriate dependencies.

Let’s examine exactly how that works with the following simple example (require.js config):

{
    shim: {
        "underscore": {
            exports: "_"
        },
        "backbone": {
            deps: ["jquery", "underscore"],
            exports: "Backbone"
        }
    }
}

Here we are applying shims for both underscore and backbone. For underscore the shim will wrap the library and then return the value of the _ variable for any scripts using it as a dependency. The backbone case is slightly more complicated:

  1. It wraps the library and exports the value of the Backbone variable.
  2. It makes sure that, when evaluated, backbone has access to both jquery and underscore

Let’s see how we would get the same setup using webpack loaders:

{
  module: {
    loaders: [
      { test: /underscore/, loader: 'exports?_' },
      { test: /backbone/, loader: 'exports?Backbone!imports?underscore,jquery' }
    ]
  }
}

Conceptually it’s not very different from the previous version. The main difference is that you configure each loader in a single line.

A few things to note:

  1. A webpack loader is more dynamic than a require.js shim and is actually a lot more like a require.js plugin.
  2. The test is a regular expression which matches against the full file path, so be careful to be specific!
  3. In order to use these loaders you need to install them: npm install exports-loader imports-loader

Pretty much all of the common use cases you might bump into are covered in the webpack guide to shimming modules.

The Fun Part: NPM

So now that you’ve migrated all of your shim and paths config, let me suggest something: delete it.

Now go to NPM and install underscore, jquery, backbone, react. Whatever else you’re using is probably there as well and more than likely will work out of the box without any special loaders or aliasing.

That’s the magic of webpack. If you get your dependencies from NPM you don’t need any of that!

They will just work, and you can go on building great apps instead of spending time carefully maintaining each dependency in your config files.

To add support for NPM to webpack, just make sure this is included in your webpack config:

{
    resolve: {
        modulesDirectories: ['public/js', 'node_modules']
    }
}
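
With node_modules in the resolve path, anything installed from NPM can be required CommonJS-style, with no paths, aliases or shims. Here’s a minimal sketch (assuming underscore and backbone were installed from NPM):

// webpack resolves these straight out of node_modules;
// no alias or shim configuration is required
var _ = require('underscore');
var Backbone = require('backbone');

// plain CommonJS usage, exactly as you would write it on the server
var Account = Backbone.Model.extend({
    defaults: { balance: 0 }
});

console.log(_.keys(new Account().attributes)); // ['balance']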

Migrating the Router

The heart of our app is this fancy router. When a new route comes in we would dynamically pull in the view associated with that path, instantiate it and re-render the main app view with the new page. It works pretty well. The main nav and everything are part of the main app, but each tab in the navbar had its own separate JS bundle and we’d only pull that in on an as-needed basis*.

This was by far the most challenging piece, mostly because of my misunderstanding of how different the splitting/bundling techniques are between webpack and require.js.

* This has changed for us somewhat recently, but hopefully the lesson will still be valuable for you!

Goodbye Manual Splitting

So we’ve talked a lot about our require.js config. Now we’re going to talk about our r.js (the require.js optimizer) config, which holds all of the additional information needed, beyond what was already mentioned, to create our JavaScript build. Here’s the bulk of it:

{
    baseUrl: 'public/js',
    mainConfigFile: 'public/js/config.js',
    dir: '.build/js',
    modules: [
        { name: 'config' },
        { name: 'view/summary/index', exclude: ['config'] },
        { name: 'view/activity/index', exclude: ['config'] },
        { name: 'view/transfer/index', exclude: ['config'] },
        { name: 'view/wallet/index', exclude: ['config'] },
        { name: 'view/settings/index', exclude: ['config'] },
    ]
}

Most of this file is concerned with all of the different bundles we are making. With webpack we can simplify that greatly.

At first my instinct was to replace this with the webpack entry concept. I spent a lot of time going down the wrong path worrying about entries. You probably don’t need that.

There is another approach that worked better for us:

  • Using async require() to create split points in our app and then letting webpack create the bundles automatically.

The Routing Code: Old and New

Here is the gist of the old require.js code:

function handleRouteChange(path) {
    require(['/views/' + path], function(PageView) {
        app.setView(new PageView());
    });
}

The first thing I tried to do when moving this to webpack was to just leave it as-is. It seemed to work! The problem is that it only created a single bundle for all of the views. This means a significantly larger payload for the browser.

Here’s our improved webpack solution:

function loadPage(PageView) {
    app.setView(new PageView());
}
function handleRouteChange(path) {
    switch (path) {
        case 'settings':
            require(['/views/settings'], loadPage);
            break;
        case 'transfer':
            require(['/views/transfer'], loadPage);
            break;
        case 'wallet':
            require(['/views/wallet'], loadPage);
            break;
        default:
            // you could either require everything here as a last resort or just leave it.
    }
}

Each time you use the AMD-style require([]) or require.ensure() webpack will see what you passed in there and go and create a bundle with every possible match. That works great if you pass in a single file, but when you use a variable, it might end up bundling your entire view folder. That’s why you need to use something like a switch statement to make sure that you declare your split points on a route-by-route basis.

You should probably grumble about this a bit, but just remember that because you do this, you don’t need any module or entry logic in your config. Webpack will create all the bundles you need automatically (well, mostly automatically) 🙂

Entry or Async Require

Let me do my best to explain when you want an entry and when you want to rely on webpack to do the splitting for you:

  1. Use an entry if you want to include a new <script> tag in a page.
  2. Use AMD-style require([]) or require.ensure() if you want webpack to dynamically pull in additional modules.
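
For the second case, a require.ensure() version of one of the routes above might look like this (a sketch reusing the same view paths from the router example):

function handleRouteChange(path) {
    switch (path) {
        case 'settings':
            // webpack sees this call and emits a separate bundle
            // containing /views/settings and its dependencies
            require.ensure(['/views/settings'], function (require) {
                loadPage(require('/views/settings'));
            });
            break;
        // ... other routes as before
    }
}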

I recommend reading Pete Hunt’s excellent webpack how-to on the subject of async loading for more information.

The trouble with our CDN

One of the last things that got us was our CDN. Everything worked fine in local development mode, but when we got to our live testing environments we noticed that it was trying to pull additional bundles off of the webserver by default and not the CDN where our JS actually lives in production. require.js handles this automatically by parsing the URL from the <script> on the page. Webpack doesn’t have this magic. If you have a non-standard place where you want it to pull down additional dynamic bundles you need to tell it:

__webpack_public_path__ = document.body.getAttribute('data-js-path') + '/apps/';

More info can be found here: https://github.com/webpack/docs/wiki/configuration#outputpublicpath

If we’d used webpack to handle our entire build process we could let webpack apply the hash automatically, but since we’re still only using it for our JS build, we need to handle this programmatically. It’s probably one of my least favorite things about the setup: not terribly hard, just a bit annoying to remember.

Build Size & Time

At this point we were pretty excited about everything. It was actually working, but we noticed that the file sizes were a bit higher, maybe even a lot higher than our old require.js builds.

Enter Display Reasons

webpack --display-reasons is one of the most helpful things a build tool has ever provided.

Run webpack with that flag and you’ll see:

  1. Every module that was added to your bundle
  2. How much space it’s taking up
  3. The filename and module format that included it

Here is the partial output from a recent build:

[Screenshot: partial webpack --display-reasons output]

This is a goldmine of information. Which modules took up the most space? How did that get included in this bundle? All answered with this information!

Note: By default all of the files from NPM are excluded from the output. If you want to include them, simply add the --display-modules flag to your command (as above) and you can see every file that requires jquery (node_modules is shortened to ~) or other shared modules. It’s pretty awesome and it helped us find a little issue with moment.js.

Moment.js and the Ignore Plugin

When we first started this process we noticed moment.js was pulling in 900kb of files. In the minified output it was still around 100kb. After poking around a bit we came across this stackoverflow article: How to prevent Moment.js from Loading Locales with Webpack.

Because you’re now in a CommonJS environment, Moment thinks it’s safe to load all of its locale information. It presumes that you’re on the server, not bundling JS for someone’s Windows XP box. Thankfully there’s a webpack plugin that will strip out those extra requests pretty easily.

{
    plugins: [
        new webpack.IgnorePlugin(/^\.\/locale$/, [/moment$/]), // saves ~100k
    ]
}

Source maps

These can also cause your builds to appear huge. They are great and can really help with debugging, but make sure you know what you’re doing with them and make sure you’re not counting them against your total bundle size. Most browsers only download these if the developer console is open, so they don’t affect your customers.

However, if you set your source-map type to eval or eval-source-map, the source maps are bundled directly into your JS. Do not use this type of source map in your production build. Here’s our development config:

{
    devtool: 'source-map'
}

There is this whole huge section in the webpack docs about all the various kinds of source map options you can use here:
http://webpack.github.io/docs/configuration.html#devtool

Minification

To add minification with UglifyJS, all you need to do is add this config:

{
    plugins: [
        new webpack.optimize.UglifyJsPlugin({minimize: true})
    ]
}

Conclusion

It was a long and storied journey from require.js to webpack, but ultimately the entire team is reaping the benefits. One of the best features this article didn’t mention is the excellent babel-loader, which made it very easy to start using ES6 features in our code today. The whole ecosystem and support for webpack have been fantastic. I hope this little guide helps your team avoid the hurdles we ran into. If you need any help feel free to leave me a note on twitter or check out the excellent webpack chat room on gitter. Have a great day!

Isomorphic React Apps with React-Engine

Earlier this year, we started using react in our various apps at PayPal. For existing apps the plan was to bring react in incrementally for new features and to start migrating portions of the existing functionality into a pure react solution. Regardless, most implementations were purely client side driven. But most recently, in one of the apps that we had to start from scratch, we decided to take a step forward and use react end to end. Given that express-based kraken-js is PayPal’s application stack, we wanted react views in JS or JSX to be the default template solution, along with routing and isomorphic support.

Thus, a summary of our app’s requirements was:

  1. Use react’s JSX (or react’s JS) as the default template engine in our kraken-js stack
  2. Support server side rendering and preserve render state in the browser for fast page initialization
  3. Map app’s visual state in the browser to URLs using react-router to support bookmarking
  4. Finally, support both react-router based UI workflows and simple stand alone react view UI workflows in kraken-js

As we started working towards the above requirements, we realized that there was a lot of boilerplate involved in using react-router alongside simple react views in an express app. react-router requires its own route declaration to be run before a component can be rendered, because it needs to figure out the component based on the URL dynamically. But plain react views could just be rendered without any of those intermediate steps.

We wanted to take this boilerplate that happens at express render time and simplify it behind a clean api. Naturally, all things pointed to express’s `res.render`. The question now became how `res.render` could be used as-is, without any api facade changes, while supporting both react-router rendering and regular react view rendering.

Thus react-engine was born to abstract all of the complexities into the res.render!

So in simple terms, react-engine is a JavaScript library for express-based Node.js web apps for rendering composite react views. The phrase composite react views reflects react-engine’s ability to handle rendering of both react-router based UI workflows and standalone react views.

Render Example
// to run and render a react router based component
res.render('/account', {});
// or more generically
res.render(req.url, {});

// to render a react view
res.render('index', {});

Notice how the first render method is called with a `/` prefix in the view name. That is key to react-engine’s MAGIC. Behind the scenes, react-engine uses a custom express View to intercept view names: if a name starts with a `/`, it first runs react-router and then renders the component that react-router spits out; if there is no `/`, it just renders the view file.

Setup Example
var express = require('express');
var engine = require('react-engine');

var app = express();

// react-engine options
var engineOptions = {
  // optional; only needed when using react-router
  reactRoutes: 'PATH_TO_REACT_ROUTER_ROUTE_DECLARATION' 
};

// set `react-engine` as the view engine
app.engine('.jsx', engine.server.create(engineOptions));

// set the view directory
app.set('views', __dirname + '/public/views');

// set jsx as the view engine
app.set('view engine', 'jsx');

// finally, set the custom react-engine view for express
app.set('view', engine.expressView);

In a nutshell, the code sets react-engine as the render engine with its custom express View for express to render jsx views.

react-engine supports isomorphic react apps by bootstrapping server rendered components onto the client or browser in an easy fashion. It exposes a client api function that can be called whenever the DOM is ready to bootstrap the app.

document.addEventListener('DOMContentLoaded', function onLoad() {
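    // `client` is react-engine's client api; boot() bootstraps the
    // server-rendered components in the browser once the DOM is ready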
    client.boot(options, function onBoot(renderData) {
    });
});

The complete options spec can be found in the react-engine documentation. More detailed examples of using react-engine can be found here: https://github.com/paypal/react-engine/tree/master/examples.

We love react at PayPal, and react-engine helps us abstract away the boilerplate of setting up react with react-router in isomorphic express apps so we can focus on writing the business logic.

Maintaining JavaScript Code Quality with ESLint

As a lead UI engineer on the consumer web team at PayPal, I’ve often seen the same patterns of mistakes repeated over and over again. In order to put an end to the most egregious errors we started using JSHint early on in the project. Despite its usefulness in catching major syntax errors, it did not know anything about our code, our patterns or our projects. In trying to improve code quality across the whole consumer web team, we needed a linting tool that would let us teach it how we wanted to code.

Enter ESLint

ESLint was started in 2013 by Nicholas Zakas with the goal of building a customizable linting tool for JavaScript. With that goal in mind Nicholas decided to make each rule standalone and provided a mechanism to easily add additional rules to the system.

Here is a view of the ESLint directory structure:

[Screenshot: ESLint directory structure]

ESLint uses the Esprima parser to convert the source code it is linting into an abstract syntax tree. That tree is passed into each of the rules for further analysis. When a violation is found it is reported back up to ESLint and then displayed.

Understanding Abstract Syntax Trees

An abstract syntax tree (AST) is a data structure that represents the meaning of your code.

Let’s use this simple statement as an example:

[Screenshot: example statement]

That can be represented by the following syntax tree:

[Screenshot: syntax tree for the example statement]

When Esprima generates an AST from a piece of code it returns an object. That object includes information not found in the original source, such as the node type, which proves useful later in the linting process.

Here is the Esprima generated AST for our earlier example:

[Screenshot: Esprima-generated AST for the example]

One of the most important pieces of information found here is the node type. In this example there are three types of nodes: VariableDeclaration, VariableDeclarator and BinaryExpression. These types and other metadata generated by the parser help programs understand what is happening in the code. We’ll take advantage of that information as we learn to write custom rules for ESLint.
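
As an illustration, a statement like var sum = a + b; parses into an AST along these lines (abridged, with location data omitted):

// abridged Esprima AST for: var sum = a + b;
{
    "type": "Program",
    "body": [{
        "type": "VariableDeclaration",
        "kind": "var",
        "declarations": [{
            "type": "VariableDeclarator",
            "id": { "type": "Identifier", "name": "sum" },
            "init": {
                "type": "BinaryExpression",
                "operator": "+",
                "left": { "type": "Identifier", "name": "a" },
                "right": { "type": "Identifier", "name": "b" }
            }
        }]
    }]
}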

Building A Custom Rule

Here’s a simple example of a rule we use internally to prevent overriding a property we rely on in our express routes. Overriding this property led to several bugs in our code, so it was a great candidate for a custom rule.

[Screenshot: source of the custom rule]

As you can see, our rule returns an object whose key is the name of the AST node type we want to inspect. In our case we’re looking for nodes of type AssignmentExpression; we want to know when a variable is being assigned. As ESLint traverses the AST, if it finds an AssignmentExpression it will pass that node into our function for further inspection.

Within our function we’re checking to see if that expression is happening as part of a MemberExpression. A MemberExpression occurs when we’re assigning a value to a property on an object. If that’s the case we explicitly check for the name of the object and the property and then call context.report() to notify ESLint when there has been a violation.
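
A rule along those lines might look like the following sketch, written in ESLint’s rule format. The object and property names used here (res and locals) are hypothetical stand-ins for the internal property our real rule protects:

// a sketch, not our actual rule: flag assignments that overwrite
// a reserved property (hypothetically, res.locals)
module.exports = function (context) {
    return {
        AssignmentExpression: function (node) {
            var target = node.left;

            // only assignments to an object property are interesting,
            // i.e. the left-hand side is a MemberExpression
            if (target.type !== 'MemberExpression') {
                return;
            }

            if (target.object.name === 'res' && target.property.name === 'locals') {
                context.report(node, 'Do not overwrite res.locals; extend it instead.');
            }
        }
    };
};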

This is a simple example of the power and ease with which custom rules can be built using ESLint. More information about building custom rules can be found in the Working with Rules section of the ESLint home page.

Packaging Custom Lint Rules

ESLint allows you to reference custom rules in a local directory using the --rulesdir flag. You simply tell ESLint where to look for the rule and enable it in your configuration file (using the filename as the key). This works well if the rules are relevant to a single project; however, at PayPal we have many teams and many projects which can benefit from our custom rules. Our preferred method is to bundle rules together as an ESLint plugin and install them with NPM.

To create an ESLint plugin you need to create a module which exports two properties: rules and rulesConfig. In our case we’re using the requireindex module to create the rules object based on the contents of our rules/ folder. In either case the key should match the name of the rule and the value should be the rule itself. The rulesConfig property, on the other hand, allows you to define the default severity for each of those rules (1 being a warning, 2 an error). Any module defined in the node_modules folder with the name eslint-plugin-* will be made accessible to ESLint.

Here is what our internal plugin looks like:

[Screenshot: our internal ESLint plugin module]
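
In outline, such a plugin module (with a hypothetical rule name) can be as small as this:

// index.js of an eslint-plugin-* package: a sketch assuming a rules/
// folder with one file per rule, e.g. rules/no-overwrite-locals.js
var requireindex = require('requireindex');

module.exports = {
    // builds { 'no-overwrite-locals': [Function], ... } keyed by filename
    rules: requireindex(__dirname + '/rules'),

    // default severity for each rule: 1 = warning, 2 = error
    rulesConfig: {
        'no-overwrite-locals': 2
    }
};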

If you search for “eslint-plugin” on the npm website there are many plugins provided by the community at large. These shared modules help us identify best practices, catch potential sources of bugs, and can be brought into any of our projects.

Conclusion

We love ESLint at PayPal. We’ve built it into our continuous integration pipeline and have it integrated with our IDEs. We’re glad to see a vibrant community growing up around ESLint and hope to give back more in the future. As our own Douglas Crockford is fond of saying, “Don’t make bugs!” That’s pretty hard to do, but at least linting helps us catch many of them before they can do much harm.

PayPal Hosts Inaugural NodeDay

Since the launch of krakenJS in September, there’s been a ton going on: great JavaScript and node.js conferences, meetups, hackathons, talks, trainings and other events. Good times, great people and useful information to be sure, but there seemed to be something missing.

Isaac Schlueter noted in his talk at NodeSummit 2013 how the Node user base has shifted drastically over the past year. Based on npm stats, he showed that the peaks of activity have moved from weekend nights into more familiar territory: weekdays, nine to five.

The nametags on attendees at these conferences and events have changed. Instead of seeing individuals on a quest for personal knowledge, I’ve started seeing employees being sent by their companies to figure out how to make this node.js thing work for them.

node.js might have started as the sole domain of the curious, the bold, the weekend warriors who love to be on the bleeding edge of technology; those who go home after work to learn new stuff. Well, that is no longer the case. Node has been adopted within the industry.

As I’ve attended events, I’ve been struck by how much interest there has been in the fact that PayPal started using node.js. While we were certainly not the first company to do so, we were among the biggest to take the plunge.

People are hungry for information. Topics like “How to write a Node App” are no longer sufficient. They want to hear “How to write a BIG Node App. And how to deploy it. And how to scale it.”

To get to where we are today on our Node journey, we’ve had to clear technical, political and cultural hurdles. We have some answers to these questions, but we’re not the only ones who do.

node.js is built around a thriving and vibrant open source community. Many individuals have poured their blood, sweat, and tears into it; but as this ecosystem continues to evolve we as a company also have a responsibility to be good corporate citizens and contribute to it. Open sourcing Kraken was a first step, but there is still more that we can do.

NodeDay was born out of a quick conversation, because the idea is such a natural fit for the times: Bring together people from pioneering companies and organizations that have embraced (or are thinking about embracing) Node and allow them to share information, best practices, advice, tips, tricks, and horror stories. Anything and everything that is relevant to the enterprise.

Last Friday we hosted the inaugural NodeDay with over 400 node.js enthusiasts in attendance. This conference was not aimed at individual developers. It was for the companies that see Node as a viable technology to embrace, but are not quite sure how to go about it; for those who are ready to move from toy projects and pilots to major rollouts.

While we don’t presume to have all the answers, we will contribute enthusiastically to the Node ecosystem. And we hope other companies will follow our lead. A stronger industry presence gives more credibility to Node – which will in turn benefit the industry.

Won’t you join us?

You can check out some of the presentations from our successful NodeDay here. Check back for more information on future NodeDays.

Node.js at PayPal

There’s been a lot of talk on PayPal moving to node.js for an application platform. As a continuation from part 1 of Set My UI Free, I’m happy to say that the rumors are true and our web applications are moving away from Java and onto JavaScript and node.js.

Historically, our engineering teams have been segmented into those who code for the browser (using HTML, CSS and JavaScript) and those who code for the application layer (using Java). Imagine an HTML developer who has to ask a Java developer to link together page “A” and page “B”. That’s where we were. This model has fallen behind with the introduction of full-stack engineers, those capable of creating an awesome user interface and then building the application backing it. Call them unicorns, but that’s what we want, and the primary blocker at PayPal has always been the artificial boundary we established between the browser and the server.

Node.js helps us solve this by enabling both the browser and server applications to be written in JavaScript. It unifies our engineering specialties into one team which allows us to understand and react to our users’ needs at any level in the technology stack.

Early adoption

Like many others, we slipped node.js in the door as a prototyping platform. Also like many others, it proved extremely proficient and we decided to give it a go on production.

Our initial attempts used express for routing, nconf for configuration, and grunt for build tasks. We especially liked the ubiquity of express, but found it didn’t scale well in multiple development teams. Express is non-prescriptive and allows you to set up a server in whatever way you see fit. This is great for flexibility, but bad for consistency in large teams. Over time we saw patterns emerge as more teams picked up node.js and turned those into Kraken.js; it’s not a framework in itself, but a convention layer on top of express that allows it to scale to larger development organizations. We wanted our engineers to focus on building their applications and not just focus on setting up their environments.

We’ve been using kraken.js internally for many months now (we’ll be open sourcing it soon!) and our engineering teams are eager to finally bring the internal node.js applications we’ve built live.

Bringing node.js to production

Our first adopter of node.js in production wasn’t a minor application; it was our account overview page and one of the most trafficked apps on the website. We decided to go big, but we also mitigated that risk by building the equivalent Java application in parallel. We knew how to deploy and scale Java applications, so if anything went wrong with the node.js app, we could fall back to the Java one. This provided the setting for some interesting data.

Development

We started in January and it took us a few months to get the necessary infrastructure in place for node.js to work at PayPal, e.g. sessions, centralized logging, keystores. During this time we had five engineers working on the Java application. Two months into the Java development, two engineers started working on the parallel node.js app. In early June they met at a crossroads: the applications had the same set of functionality. A few details stood out after we ran the test cases and both applications passed the same functional tests. The node.js app was:

  • Built almost twice as fast with fewer people
  • Written in 33% fewer lines of code
  • Constructed with 40% fewer files

This provided encouraging evidence to show that our teams could move faster with JavaScript. We were sold and made the decision to put the Java app on hold while we doubled down on the JavaScript one. The great news is that the Java engineers on the project, unsure about node.js in the beginning, delightfully moved over to node.js and are happily committing to a parallel work stream, providing us with double the productivity we were originally seeing.

Performance

Performance is a fun and debatable topic. In our case, we had two applications with the exact same functionality, built by roughly the same teams: one on our internal Java framework based on Spring and the other built on kraken.js using express, dust.js and other open source code. The application contained three routes, and each route made between two and five API requests, orchestrated the data and rendered the page using Dust.

We ran our test suite using production hardware that tested the routes and collected data on throughput and response time.

[Graph: Node.js vs. Java performance]

You can see that the node.js application had:

  • Double the requests per second vs. the Java application. This is even more interesting because our initial performance results were using a single core for the node.js application compared to five cores in Java. We expect to increase this divide further.
  • 35% decrease in the average response time for the same page. This resulted in the pages being served 200ms faster, something users will definitely notice.

There’s a disclaimer attached to this data: this is with our frameworks and two of our applications. It’s just about as apples-to-apples a performance test as we could get between technologies, but your mileage may vary. That said, we’re excited by what we’ve seen from node.js’ raw performance.

Future

All of our consumer-facing web applications going forward will be built on node.js. Some, like our developer portal, are already live while others, like account overview, are in beta. There are over a dozen apps already in some stage of this migration and we will continue to share data as more applications go live. This is an exciting time for engineering at PayPal!

If any of this sounds interesting come work for us!

Dust is eloquent

“Dust is eloquent”   – Benedict Cumberbatch in “Sherlock”
eloquent – Clearly showing feeling or meaning – Merriam Webster dictionary

In the inaugural post of the PayPal Engineering Blog, we introduced Dust as the UI templating solution for all of PayPal’s development environments (Node.js, Java, and C++). That post really didn’t say a lot about the details of Dust. Today, let’s “be the UI engineer” and see how Dust lets us express ourselves in a natural way to convey the experience we want for the user. This will teach you a bit about Dust and, hopefully, encourage you to learn more.

UI engineers work in three separate technologies: HTML, JavaScript and CSS. Taken together, these three deliver the amazing range of experiences you see using a modern web browser. From the UI engineer’s perspective, HTML is the natural mode of expression. However, without dynamic data, all you have is a static page, so some compromise must exist to weave dynamic material into an HTML page. Some solutions for incorporating data, like JSP, tend to be too “in your face” and obscure the HTML the developer is working on. A minimal footprint on the basic page that leaves it still viewable in editing tools is a plus. Dust strives to remain in the background as much as possible.

Dust deals with just two inputs: a template (an HTML page with some Dust tags) and dynamic data. The data part is constructed by the business logic and contains dynamic information for use with the template. The data is in JSON, a natural format since Dust is a JavaScript-based template language. This choice turns out to be a good one for other reasons: the back-end logic that generates the JSON data can equally well supply it to a smartphone application to render in native mode, or to a web client Ajax request to render to HTML locally in the browser.

The developer adds Dust tags to the HTML defining how to incorporate values from the data. In many ways, Dust tags are akin to HTML tags. Instead of < and >, Dust tags are enclosed with { and }. Where HTML uses </name> for a closing tag, Dust uses {/name}.

Rather than a lengthy exposition, let’s use an example to see how a Dust template and associated data are processed. This example deals with presenting address information for a user.

Fill In the Blanks Example

Address Template:
<div>{street}</div>
<div>{city}, {state}</div>

Data in JSON format:
{
"street": "234 First Avenue",
"city": "Anytown",
"state"": "CA"
}

The Dust template processor starts copying everything from the template to the output until it encounters something wrapped in braces; in this case that would be the string {street}. This is the simplest dust tag and is equivalent to a “fill in the blank” process.  Dust looks for “street” in the data and outputs the value in place of the {street} token. Applying the process to the entire template, the output will be:

<div>234 First Avenue</div><div>Anytown, CA</div>

Great, now we can write form letters. What happens if a tag such as {street} cannot be found in the data? Simple: nothing is output and processing moves on. So what if the address data is part of a more complex user object like the following?

{
 "user": {
     "name": "Mary Smith",
     "address": {
         "street": "234 First Avenue",
         "city": "Anytown",
         "state": "CA"
     },
     "phone": "(650) 555-1212"
  }
}

In order to reference the street field, we need to use what Dust calls a path.

{user.address.street}

This will output the street value. Since this is the same general notation that a developer would use in JavaScript it feels natural. The previous address template looks like this when paths are used:

Address Template:
<div>{user.address.street}</div>
<div>{user.address.city}, {user.address.state}</div>
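
To see the two inputs come together, here is a minimal Node.js sketch using the dustjs-linkedin package to compile and render this template against the user data above:

var dust = require('dustjs-linkedin');

var template = '<div>{user.address.street}</div>' +
               '<div>{user.address.city}, {user.address.state}</div>';

var data = {
    user: {
        name: 'Mary Smith',
        address: {
            street: '234 First Avenue',
            city: 'Anytown',
            state: 'CA'
        },
        phone: '(650) 555-1212'
    }
};

// compile the template under a name, register it, then render it with the data
dust.loadSource(dust.compile(template, 'address'));
dust.render('address', data, function (err, out) {
    console.log(out); // <div>234 First Avenue</div><div>Anytown, CA</div>
});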

However, a UI engineer’s job is a lot more complicated than just filling in blanks in an HTML page. There are problems like generating a list of all accounts, showing alternate text when data is missing, and more. In future posts, I’ll delve further into what Dust offers the developer. Future topics will include:

  • Handling lists of data
  • Logic-less templating. How viable is it?
  • Reuse with dust for the UI engineer
  • Security considerations for the UI engineer
  • Dealing with internationalized content

In the next post, I will cover “Handling lists of data”

Dust JavaScript templating, open source and more

A year ago we set out on an effort to revitalize PayPal’s user interface. As you may know, there are quite a few pages on our site with designs that date back years, and it’s about time we got rid of them! To accomplish this, a large part of what we had to overcome was technical debt, and we needed a technology stack free of that debt which would enable greater product agility and innovation. With this in mind we set out to “free our UI bits”.

At the time, our UI applications were based on Java and JSP, using a proprietary solution that was rigid, tightly coupled and hard to move fast in. Our teams didn’t find it complementary to our Lean UX development model, so they would build their prototypes in a scripting language, test them with users, and then later port the code over to our production stack.

There are obvious issues with this approach, so we decided it was time to change things and came up with a few requirements:

  1. The templating must be decoupled from the underlying server technology and allow us to evolve our UIs independent of the application language
  2. It must be open source
  3. It must support multiple environments

Decoupling our templates

Templating was the cornerstone piece, but we didn’t need to dwell on it too much. Everyone is familiar with JavaScript templating nowadays, and it has huge benefits in enabling both client- and server-side rendering, so it seemed like a given. Dust, Mustache, Handlebars: they all have similar approaches to JavaScript templating, each with their own pros and cons. In the end, we opted to partner with LinkedIn and go with Dust templates, and we have been happy to contribute back to the project where we can.

As it turned out, Dust was so successful and well received in the company that before we finished integrating it into our framework we found ourselves with half a dozen teams using it. We found JavaScript templating to be a great complement to a Lean UX iteration cycle and have been impressed with the speed at which our teams can turn design ideas into code. Along the way we opted for a few other enhancements to our templating, including leveraging Bootstrap for initial concepts and using Bower to manage our components.

By the time Dust was officially released, all of our products were using it in place of JSP.

Open source all the things

We made a conscious decision to fully embrace open source in our UI. It sounds like common sense, but enterprise-scale companies often have a habit of creating proprietary solutions — at least we’ve made this mistake in the past — and keeping them proprietary. There are three major downsides to this approach:

  1. Proprietary solutions create an artificial ramp up time for new employees to become productive
  2. It forces a portion of your team to focus their energy on building your solution rather than your product
  3. These solutions are rarely better than the ones available in the open source community

Through a bit of exploration the rest of our UI stack started to fall into place:

  • LESS was the CSS pre-processor we chose since it aligned well with our JavaScript stack and worked with Bootstrap
  • RequireJS was chosen as our JavaScript module loader due to its clear and descriptive nature
  • Backbone.js worked out as our client-side MV* framework
  • Grunt was being used for our UI build tasks, because, well, Grunt is just awesome
  • Mocha was starting to be explored for enabling CI driven workflows for our UIs

Multiple environments

One requirement we had was that the templating solution must support multiple environments. An interesting fact of being an internet company for more than a few years is that you tend to end up with multiple technology stacks. In our case, we have legacy C++/XSL and Java/JSP stacks, and we didn’t want to leave these UIs behind as we continued to move forward. JavaScript templates are ideal for this. On the C++ stack, we built a library that used V8 to perform Dust renders natively – this was amazingly fast! On the Java side, we integrated Dust using a Spring ViewResolver coupled with Rhino to render the views.

Node.js was also introduced into the company around this time. Since our teams already knew JavaScript the pure development speed we could achieve with node was highly desirable to our product agility. Plus, integrating Dust was trivial since they are both JavaScript.

Finally, let’s not forget the browser as a rendering environment. Dust allows us to easily perform client side rendering. This brings up an interesting point though: rather than saying we’re going to render all pages in the browser because we can, we’re opting to do what makes sense for both your browser and our engineers. Client side rendering is great, but not always needed. You’ll notice on the new PayPal pages that the first page is always rendered on the server to bootstrap the app, but from there it’s up to the UI engineers to decide if they need client side rendering or not. We like this flexibility.

Live Dust apps

So, all of this is good, but where are we using Dust and the new technologies today? Well, it’s actually live on a good portion of our website that’s been redesigned. This includes our:

  • Home pages
  • Checkout (currently being A/B tested)
  • New user signup
  • A lot more on the way!

What’s next

Our UI engineers are now happy. They can iterate quickly on production-style apps with their design counterparts and get those ideas out of their heads and in front of customers. We’re now starting to see a build, test, learn culture form, which is a long way from the waterfall model we traditionally had with our design partners, and hopefully our users start to notice as more of the website gets updated.

Changing the front end technologies at PayPal was a huge success, but we’re now looking to make another step. Node.js started to organically spread through the company as a part of these efforts and we will take a look at that in Part 2.

Finally, we’re hiring, so if any of this sounds interesting come work for us!