Tag Archives: Node.js

Isomorphic React Apps with React-Engine

Earlier this year, we started using react in our various apps at PayPal. For existing apps, the plan was to bring react in incrementally for new features and to start migrating portions of the existing functionality to a pure react solution. Regardless, most implementations were purely client-side driven. But most recently, in one of the apps that we had to start from scratch, we decided to take a step forward and use react end to end. Given that express-based kraken-js is PayPal’s application stack, we wanted react views in JS or JSX to be the default template solution, along with routing and isomorphic support.

In summary, our app’s requirements were:

  1. Use react’s JSX (or react’s JS) as the default template engine in our kraken-js stack
  2. Support server side rendering and preserve render state in the browser for fast page initialization
  3. Map app’s visual state in the browser to URLs using react-router to support bookmarking
  4. Finally, support both react-router based UI workflows and simple stand alone react view UI workflows in kraken-js

As we started working towards the above requirements, we realized that there was a lot of boilerplate involved in using react-router alongside simple react views in an express app. react-router requires its own route declaration to be run before a component can be rendered, because it needs to figure out the component dynamically based on the URL. But plain react views could just be rendered without any of those intermediate steps.

We wanted to take this boilerplate that happens at express render time and simplify it behind a clean API. Naturally, all things pointed to express’s `res.render`. The question now became clear: how could `res.render` be used as-is, without any API facade changes, while supporting both react-router rendering and regular react view rendering?

Thus react-engine was born to abstract all of these complexities behind res.render!

So in simple terms, react-engine is a JavaScript library for express-based Node.js web apps that renders composite react views. The phrase “composite react views” reflects react-engine’s ability to handle rendering of both react-router based UI workflows and standalone react views.

Render Example
// to run and render a react router based component
res.render('/account', {});
// or more generically
res.render(req.url, {});

// to render a react view
res.render('index', {});

Notice how the first render method is called with a `/` prefix in the view name. That is the key to react-engine’s MAGIC. Behind the scenes, react-engine uses a custom express View to intercept view names: if a name starts with a `/`, it first runs react-router and then renders the component that react-router resolves; if there is no `/`, it simply renders the view file.

Setup Example
var express = require('express');
var engine = require('react-engine');

var app = express();

// react-engine options
var engineOptions = {
  // optional, if not using react-router
  reactRoutes: 'PATH_TO_REACT_ROUTER_ROUTE_DECLARATION' 
};

// set `react-engine` as the view engine
app.engine('.jsx', engine.server.create(engineOptions));

// set the view directory
app.set('views', __dirname + '/public/views');

// set jsx as the view engine
app.set('view engine', 'jsx');

// finally, set the custom react-engine view for express
app.set('view', engine.expressView);

In a nutshell, the code sets react-engine as the render engine with its custom express View for express to render jsx views.
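For reference, here is a sketch of what the route declaration file pointed to by `reactRoutes` might look like, using the react-router 0.x API of that era (the file and component paths here are illustrative):

// routes.jsx
var React = require('react');
var Route = require('react-router').Route;

// react-engine runs these routes for any view name that starts with `/`
module.exports = (
    <Route path="/" handler={require('./views/layout.jsx')}>
        <Route path="account" handler={require('./views/account.jsx')} />
    </Route>
);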

react-engine supports isomorphic react apps by bootstrapping server rendered components onto the client or browser in an easy fashion. It exposes a client api function that can be called whenever the DOM is ready to bootstrap the app.

// `client` is react-engine's client-side module,
// e.g. var client = require('react-engine/lib/client');
// `options` is the client boot spec described below
document.addEventListener('DOMContentLoaded', function onLoad() {
    client.boot(options, function onBoot(renderData) {
    });
});
The complete options spec can be found in the react-engine documentation. More detailed examples of using react-engine can be found at https://github.com/paypal/react-engine/tree/master/examples.

We love react at PayPal, and react-engine helps us abstract away the boilerplate of setting up react with react-router in isomorphic express apps so we can focus on writing the business logic.

Maintaining JavaScript Code Quality with ESLint

As a lead UI engineer on the consumer web team at PayPal, I’ve often seen patterns of mistakes that repeated themselves over and over again. In order to put an end to the most egregious errors we started using JSHint early on in the project. Despite its usefulness in catching major syntax errors, it did not know anything about our code, our patterns or our projects. In trying to improve code quality across the whole consumer web team we needed to find a linting tool that let us teach it how we wanted to code.

Enter ESLint

ESLint was started in 2013 by Nicholas Zakas with the goal of building a customizable linting tool for JavaScript. With that goal in mind Nicholas decided to make each rule standalone and provided a mechanism to easily add additional rules to the system.

Here is a view of the ESLint directory structure:

(Screenshot: the ESLint source directory structure, with each rule implemented as a standalone file under lib/rules/.)

ESLint uses the Esprima parser to convert the source code it is linting into an abstract syntax tree. That tree is passed into each of the rules for further analysis. When a violation is found it is reported back up to ESLint and then displayed.

Understanding Abstract Syntax Trees

An abstract syntax tree (AST) is a data structure that represents the meaning of your code.

Let’s use this simple statement as an example:

(Screenshot: a simple statement, along the lines of `var foo = 5 + 5;`.)

That can be represented by the following syntax tree:

(Screenshot: the syntax tree diagram for that statement.)

When Esprima generates an AST from a piece of code it returns an object. That object includes information not found in the original source, such as the node type, which proves useful later in the linting process.

Here is the Esprima generated AST for our earlier example:

(Screenshot: the Esprima-generated AST for our earlier example.)
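For a statement like the one above, Esprima’s output looks roughly like this (reconstructed here for illustration):

{
    "type": "Program",
    "body": [{
        "type": "VariableDeclaration",
        "declarations": [{
            "type": "VariableDeclarator",
            "id": { "type": "Identifier", "name": "foo" },
            "init": {
                "type": "BinaryExpression",
                "operator": "+",
                "left": { "type": "Literal", "value": 5 },
                "right": { "type": "Literal", "value": 5 }
            }
        }],
        "kind": "var"
    }]
}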

One of the most important pieces of information found here is the node type. In this example there are three types of nodes: VariableDeclaration, VariableDeclarator and BinaryExpression. These types and other metadata generated by the parser help programs understand what is happening in the code. We’ll take advantage of that information as we learn to write custom rules for ESLint.

Building A Custom Rule

Here’s a simple example of a rule we use internally to prevent overriding a property we rely on in our express routes. Overriding this property led to several bugs in our code, so it was a great candidate for a custom rule.

(Screenshot: the source of our custom rule.)
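A sketch of what such a rule looks like; the object and property names below (res.locals) are hypothetical stand-ins for the express property we protect:

module.exports = function (context) {
    return {
        // called for every assignment ESLint encounters in the AST
        AssignmentExpression: function (node) {
            var target = node.left;
            // only assignments to an object property are of interest
            if (target.type === 'MemberExpression' &&
                target.object.name === 'res' &&
                target.property.name === 'locals') {
                context.report(node, 'Do not override res.locals.');
            }
        }
    };
};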

As you can see, our rule returns an object whose key is the name of the AST node type we want to inspect. In our case we’re looking for nodes of type AssignmentExpression: we want to know when a variable is being assigned. As ESLint traverses the AST, if it finds an AssignmentExpression it will pass that node into our function for further inspection.

Within our function we check whether that assignment is happening as part of a MemberExpression. A MemberExpression occurs when we’re assigning a value to a property on an object. If that’s the case, we explicitly check the name of the object and the property, and then call context.report() to notify ESLint that there has been a violation.

This is a simple example of the power and ease with which custom rules can be built using ESLint. More information about building custom rules can be found in the Working with Rules section of the ESLint home page.

Packaging Custom Lint Rules

ESLint allows you to reference custom rules in a local directory using the --rulesdir flag. You simply tell ESLint where to look for the rule and enable it in your configuration file (using the filename as the key). This works well if the rules are relevant to a single project; however, at PayPal we have many teams and many projects which can benefit from our custom rules. Our preferred method is to bundle rules together as an ESLint plugin and install them with npm.
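For example, a rule saved as lint-rules/no-override-locals.js (a hypothetical name) could be run with eslint --rulesdir lint-rules . and enabled in your configuration file by filename:

{
    "rules": {
        "no-override-locals": 2
    }
}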

To create an ESLint plugin you need to create a module which exports two properties: rules and rulesConfig. In our case we’re using the requireindex module to create the rules object based on the contents of our rules/ folder. In either case, the key should match the name of the rule and the value should be the rule itself. The rulesConfig property, on the other hand, allows you to define the default severity for each of those rules (1 being a warning, 2 an error). Any module defined in the node_modules folder with the name eslint-plugin-* will be made accessible to ESLint.

Here is what our internal plugin looks like:

(Screenshot: the source of our internal plugin.)
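Reconstructed as a sketch (the rule name under rulesConfig is hypothetical):

// index.js of an eslint-plugin-* module
var requireIndex = require('requireindex');

// export every rule in the rules/ folder, keyed by filename
module.exports.rules = requireIndex(__dirname + '/rules');

// default severities: 1 = warning, 2 = error
module.exports.rulesConfig = {
    'no-override-locals': 2
};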

If you search for “eslint-plugin” on the npm website there are many plugins provided by the community at large. These shared modules help us identify best practices, catch potential sources of bugs, and can be brought into any of our projects.

Conclusion

We love ESLint at PayPal. We’ve built it into our continuous integration pipeline and have it integrated with our IDEs. We’re glad to see a vibrant community growing up around ESLint and hope to give back more in the future. As our own Douglas Crockford is fond of saying, “Don’t make bugs!” That’s pretty hard to do, but at least linting helps us catch many of them before they can do much harm.

Excellent Opportunities for Node.js Engineers

As mentioned in a previous post, we have a number of openings for Web Application Engineers (Node.js/front-end engineers). So what kind of opportunities do we have? Well, here are a few:

Payments Acceptance Team

Location: San Jose

The Payments Acceptance team is seeking Node Web App engineers to make it easy for merchants to accept payments no matter where those interactions happen. This engineering team sits at the intersection of consumer, developer and business and will be building brand new experiences, smart lifecycle business rules, and deep service connections throughout the whole PayPal system. Their key goal is to drive growth for individual merchants.

Login Application Development Team

Location: San Jose

The Login application development team is looking for passionate full-stack node application developers to reinvent the login experience for PayPal: updating the login interface for PayPal customers and utilizing the latest security and risk measures, all with a laser focus on performance and scaling. In this role you will have the opportunity to explore innovative solutions to improve our performance, security and user experience.

Regional Solutions and Strategic Partnerships Team

Locations: San Jose, Austin, Singapore, Bangalore, Chennai, and Berlin

The Global Strategic Partnership team is focused on integrating PayPal into complex vertical markets, working closely with partners and business units across eBay, Inc. and across the world.

They are currently working aggressively to:

  • Leverage existing PayPal products in new ways
  • Partner with core product teams to drive and tune their roadmap to support our segment needs
  • “Run ahead” and develop new functionality that extends PayPal’s platform and services, while still delivering long-term solutions in-line with our strategy and target architectures.

Key segments for focus:

  • Global and super regional banks and financial entities
  • Social networking, messaging, and commerce platforms
  • Mobile carriers + key hardware partners
  • Large integrated software + hardware platforms
  • Global point of sale systems and services partners

PayPal Here and Mobile In-Store Point of Sale (POS) Team

Location: San Jose

As PayPal expands to retail markets, it is critical to enable merchants’ and partners’ success when using PayPal Here or Mobile In-Store POS as their payment methods.

One of our teams, PayPal Retail Engagement, is building applications and services that do just that: managing the merchant lifecycle from onboarding, device ordering and setup through inventory management and reporting. They are using a variety of technologies including Node.js, Java and Objective-C (hybrid development).

Kraken.js Team

Location: San Jose, Austin, and Boston

We launched the Kraken.js open source project back in November. Since then a number of people have contributed and a number of companies are now using modules from it as part of their product stacks. How would you like to work on the Core Engineering team helping to drive Kraken forward? We are building out a suite of new modules (specialization, more internationalization, testing, security and other frameworks). If you have the chops, then come be a part of this team.

In addition, if you have a knack for evangelism, public speaking, writing awesome example code and interactive documentation, then we have a specific role for you leading our documentation and evangelism around Kraken.js.

Internationalization Framework Team

Location: San Jose

Interested in making a difference in almost every country on the planet, allowing buyers and sellers around the world to have a natural and intuitive user experience, and establishing new standards for the rest to follow – all while designing and developing in Node, JavaScript, and Java? Then we have several roles for you on the internationalization team. Join a team of experts in this space and bring your Node.js skills to the table. The impact is worldwide.

Checkout Products Team

Location: San Jose

We have been entirely revamping our core checkout products, which is actually what led the charge for piloting Node.js and LeanUX. A majority of all of PayPal’s revenue runs through this product. Talk about impact! Come be a part of the core of payment innovation.

Platform Engineering Team

Location: San Jose and Austin

Platform Engineering is responsible for the APIs and web services used by internal and external developers to enable payments anytime, anywhere and any way. This platform processes billions of requests each month and hundreds of billions of dollars of payment volume each year, and it enables PayPal and its partners to rapidly innovate on new payment scenarios and new experiences.

This is a small, focused team that has visibility into the breadth of PayPal as well as the payment applications (mobile, point of sale, web, TV, etc.) that are developed on top of it. There is also a second team building a world-class developer portal that provides a great experience in finding services and APIs for both internal and external developers.

Come be a part of one of our platform engineering teams.

Funding Mix Team

Location: San Jose

This year will be an exciting year for the PayPal Funding Mix team. They are looking for full-stack engineers to rethink the product. The interesting challenge is to create the best possible experience while tuning the revenue model. This product has one of the most significant impacts on the overall profitability of PayPal. This is an awesome opportunity to really understand PayPal products and influence the consumer payment experience while growing your Node.js skills at scale.

Interested?

If you are interested, please contact us at nodejs@paypal.com.

 

Outbound SSL Performance in Node.js

When browsing the internet, we all know that employing encryption via SSL is extremely important. At PayPal, security is our top priority. We use end-to-end encryption, not only for our public website, but for all our internal service calls as well. SSL encryption will, however, affect Node.js performance at scale. We’ve spent time tuning our outbound connections to try to get the most out of them. This is a list of some of the SSL configuration adjustments that we’ve found can dramatically improve outbound SSL performance.

SSL Ciphers

Out of the box, Node.js SSL uses a very strong set of cipher algorithms. In particular, Diffie-Hellman and elliptic curve algorithms are hugely expensive and basically cripple Node.js performance when you begin making a lot of outbound SSL calls at default settings. To get an idea of just how slow this can be, here is a CPU sample taken from a service call:

918834.0ms 100.0% 0.0 node (91770)
911376.0ms 99.1% 0.0   start
911376.0ms 99.1% 0.0    node::Start
911363.0ms 99.1% 48.0    uv_run
909839.0ms 99.0% 438.0    uv__io_poll
876570.0ms 95.4% 849.0     uv__stream_io
873590.0ms 95.0% 32.0       node::StreamWrap::OnReadCommon
873373.0ms 95.0% 7.0         node::MakeCallback
873265.0ms 95.0% 15.0         node::MakeDomainCallback
873125.0ms 95.0% 61.0          v8::Function::Call
873049.0ms 95.0% 13364.0       _ZN2v88internalL6InvokeEbNS0
832660.0ms 90.6% 431.0          _ZN2v88internalL21Builtin
821687.0ms 89.4% 39.0            node::crypto::Connection::ClearOut
813884.0ms 88.5% 37.0             ssl23_connect
813562.0ms 88.5% 54.0              ssl3_connect
802651.0ms 87.3% 35.0               ssl3_send_client_key_exchange
417323.0ms 45.4% 7.0                 EC_KEY_generate_key
383185.0ms 41.7% 12.0                ecdh_compute_key
1545.0ms 0.1% 4.0                    tls1_generate_master_secret
123.0ms 0.0% 4.0                     ssl3_do_write
...

Let’s focus on key generation:

802651.0ms 87.3% 35.0 ssl3_send_client_key_exchange
417323.0ms 45.4% 7.0 EC_KEY_generate_key
383185.0ms 41.7% 12.0 ecdh_compute_key

87% of the time for the call is spent in keygen!

These ciphers can be changed to be less compute intensive. This is done in the https (or agent) options. For example:

var https = require('https');

// exclude the expensive DH/ECDH ciphers in favor of a cheaper suite
var agent = new https.Agent({
    "key": key,
    "cert": cert,
    "ciphers": "AES256-GCM-SHA384"
});

The key here is exclusion of the expensive Diffie-Hellman algorithms (i.e. DH, EDH, ECDH). With something like that, we can see a dramatic change in the sample:

...
57945.0ms 32.5% 16.0 ssl3_send_client_key_exchange
28958.0ms 16.2% 9.0 generate_key
26827.0ms 15.0% 2.0 compute_key
...

You can learn more about cipher strings from the OpenSSL documentation.

SSL Session Resume

If your server supports SSL session resume, then you can pass sessions in the (as yet undocumented) https (or agent) option session. You can also wrap your agent's createConnection function:

// keep a reference to the original factory
var createConnection = agent.createConnection;

// inject the saved SSL session into every new connection's options
agent.createConnection = function (options) {
    options.session = session;
    return createConnection.call(agent, options);
};

Session resume will decrease the cost of your connections by performing an abbreviated handshake on connection.

Keep Alive

Enabling keepalive on your agent will mitigate SSL handshakes. A keepalive agent, such as agentkeepalive, can ‘fix’ Node’s keepalive troubles, but it is unnecessary in Node 0.12.

Another thing to keep in mind is the agent’s maxSockets setting, where high numbers can result in a negative performance impact. Scale your maxSockets based on the volume of outbound connections you are making, as sketched below.
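For illustration, here is a minimal sketch using agentkeepalive (the module named above); the numbers are made up and should be tuned to your own traffic:

var HttpsAgent = require('agentkeepalive').HttpsAgent;

// a keepalive agent reuses sockets, avoiding repeated SSL handshakes
var keepaliveAgent = new HttpsAgent({
    maxSockets: 50,      // scale to your outbound call volume
    maxFreeSockets: 10   // idle sockets kept open for reuse
});

// then pass it per request:
// https.request({ host: 'internal.service', agent: keepaliveAgent }, onResponse);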

Slab Size

tls.SLAB_BUFFER_SIZE determines the allocation size of the slab buffers used by tls clients (and servers). The size defaults to 10 megabytes.

This allocation will grow your rss and increase garbage collection time. This means performance hits at high volume. Adjusting this to a lower number can improve memory and garbage collection performance. In 0.12, however, slab allocation has been improved and these adjustments are no longer necessary.
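On 0.10, adjusting it is a one-liner; the value below is illustrative:

var tls = require('tls');

// shrink slab allocations from the 10MB default
tls.SLAB_BUFFER_SIZE = 1024 * 1024; // 1MB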

Recent SSL Changes in 0.12

Testing out Fedor’s SSL enhancements.

Test Description

Running an http server that acts as a proxy to an SSL server, all running on localhost.

v0.10.22

Running 10s test @ http://127.0.0.1:3000/
20 threads and 20 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 69.38ms 30.43ms 268.56ms 95.24%
Req/Sec 14.95 4.16 20.00 58.65%
3055 requests in 10.01s, 337.12KB read
Requests/sec: 305.28
Transfer/sec: 33.69KB

v0.11.10-pre (build from master)

Running 10s test @ http://127.0.0.1:3000/
20 threads and 20 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 75.87ms 7.10ms 102.87ms 71.55%
Req/Sec 12.77 2.43 19.00 64.17%
2620 requests in 10.01s, 276.33KB read
Requests/sec: 261.86
Transfer/sec: 27.62KB

There isn’t a lot of difference here, but that’s due to the default ciphers, so let’s adjust agent options for ciphers. For example:

var agent = new https.Agent({
    "key": key,
    "cert": cert,
    "ciphers": "AES256-GCM-SHA384"
});

v0.10.22

Running 10s test @ http://localhost:3000/
20 threads and 20 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 59.85ms 6.77ms 95.71ms 77.29%
Req/Sec 16.39 2.36 22.00 61.97%
3339 requests in 10.00s, 368.46KB read
Requests/sec: 333.79
Transfer/sec: 36.83KB

v0.11.10-pre (build from master)

Running 10s test @ http://localhost:3000/
20 threads and 20 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 38.99ms 5.96ms 71.87ms 86.22%
Req/Sec 25.43 5.70 35.00 63.36%
5160 requests in 10.00s, 569.41KB read
Requests/sec: 515.80
Transfer/sec: 56.92KB

As we can see, there is a night and day difference with Fedor’s changes: almost 2x the performance between 0.10 and 0.12!

Wrap Up

One might ask “why not just turn off SSL, then it’s fast!”, and this may be an option for some. In fact, this is typically the answer I get when I ask others how they overcame SSL performance issues. But if anything, enterprise SSL requirements will increase rather than decrease; and although a lot has been done to improve SSL in Node.js, performance tuning is still needed. Hopefully some of the above tips will help in tuning for your SSL use case.

PayPal Hosts Inaugural NodeDay

Since the launch of krakenJS in September, there’s been a ton going on: great JavaScript and node.js conferences, meetups, hackathons, talks, trainings and other events. Good times, great people and useful information to be sure, but there seemed to be something missing.

Isaac Schlueter noted in his talk at NodeSummit 2013 how the Node user base has shifted drastically over the past year. Based on npm stats, he showed that peaks of activity have moved from weekend nights into more familiar territory: weekdays, nine to five.

The nametags on attendees at these conferences and events have changed. Instead of seeing individuals on a quest for personal knowledge, I’ve started seeing employees being sent by their companies to figure out how to make this node.js thing work for them.

node.js might have started as the sole domain of the curious, the bold, the weekend warriors who love to be on the bleeding edge of technology; those who go home after work to learn new stuff. Well, that is no longer the case. Node has been adopted within the industry.

As I’ve attended events, I’ve been struck by how much interest there has been in PayPal’s adoption of node.js. While we were certainly not the first company to do so, we were among the biggest to take the plunge.

People are hungry for information. Topics like “How to write a Node App” are no longer sufficient. They want to hear “How to write a BIG Node App. And how to deploy it. And how to scale it.”

To get to where we are today on our Node journey, we’ve had to clear technical, political and cultural hurdles. We have some answers to these questions, but we’re not the only ones who do.

node.js is built around a thriving and vibrant open source community. Many individuals have poured their blood, sweat, and tears into it; but as this ecosystem continues to evolve we as a company also have a responsibility to be good corporate citizens and contribute to it. Open sourcing Kraken was a first step, but there is still more that we can do.

NodeDay was born out of a quick conversation, because the idea is such a natural fit for the times: Bring together people from pioneering companies and organizations that have embraced (or are thinking about embracing) Node and allow them to share information, best practices, advice, tips, tricks, and horror stories. Anything and everything that is relevant to the enterprise.

Last Friday we hosted the inaugural NodeDay with over 400 node.js enthusiasts in attendance. This conference was not aimed at individual developers. It was for the companies that see Node as a viable technology to embrace, but are not quite sure how to go about it; for those who are ready to move from toy projects and pilots to major rollouts.

While we don’t presume to have all the answers, we will contribute enthusiastically to the Node ecosystem. And we hope other companies will follow our lead. A stronger industry presence gives more credibility to Node – which will in turn benefit the industry.

Won’t you join us?

You can check out some of the presentations from our successful NodeDay here. Check back for more information on future NodeDays.

Open sourcing kraken.js

It wasn’t far into our move to node.js that we began to notice an opportunity to contribute back to the community. There were plenty of web application frameworks out there, but things like localization, country adaptation, security, and scalability for large development teams were largely missing. We deal with money, and we do it in 193 markets covering 80 languages and 26 different currencies. That’s a lot of complexity, and it requires multiple teams to develop. Kraken was created to make this process easier.

What kraken offers

Kraken uses the popular express web application framework as a base platform and adds environment-aware and dynamic configuration, advanced middleware capabilities, application security, and lifecycle events. These features make Kraken ideal for enterprise-size companies where consistency across teams is needed, but also useful for node.js beginners who want to focus on building their application and not the application’s framework.

Pre-configured, but customizable

All of the technologies you need to build a web application are pre-configured and stitched together for you by generator-kraken. Creating a new kraken app is as easy as running yo kraken and answering a few questions.

By default, this scaffolding includes dust for templates, LESS for CSS preprocessing, RequireJS for JavaScript modules, and Grunt for task handling. This is our recommended setup, but using different technologies is supported as well.

Configuration

If you’ve used express before, you’ve probably written code to configure how your cookies are parsed, whether or not you have a favicon, how you’re accepting multipart forms, and so on. It’s extremely flexible, but that code can add complexity and, more importantly, if your applications are spread across teams they’re not guaranteed to be doing it the same way. Kraken solves this by moving this setup out of the code and into configuration files.

(Screenshot: a sample kraken configuration file.)

Application and middleware configuration is stored in JSON files, creating a consistent implementation and removing the need for tribal knowledge when configuring items, e.g. does bodyParser need to come before cookieParser or vice-versa?

These files are also environment-aware, so overriding values when you’re in development, debug, or test mode is easy. To override a value in config/app.json for development you would create config/app-development.json with the delta and then start your app using NODE_ENV=development node index.js.
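A sketch of the override mechanism (the keys shown are illustrative, not required by kraken):

// config/app.json
{
    "express": {
        "view cache": true
    }
}

// config/app-development.json (only the delta)
{
    "express": {
        "view cache": false
    }
}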

Globalization and localization

As an application grows in popularity, its developers inevitably need to support different regions. At PayPal we support 193 countries across the globe.

Applications created via generator-kraken have built-in support for externalized content when using dust templates. This content is stored in its own file using key/value pairs. We opted not to use JSON or any other complex format, choosing instead a simpler data structure that was easy enough to be hand edited if needed, but powerful enough to support the flexibility we needed.

Each template has an implicit binding to a content file of the same path and will automatically resolve strings within them. In other words, if you have templates/account/user.dust then content will be merged from locales/DE/de/account/user.properties for German users. This removes the hassle of needing to manually wire up your content source.

(Screenshot: a sample content file.)
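A sketch of the idea with hypothetical keys; the content file is plain key/value pairs:

# locales/DE/de/account/user.properties
greeting=Hallo
user.balance=Kontostand

Inside templates/account/user.dust, a key is then referenced through the dust content helper, roughly like {@pre type="content" key="greeting"/}.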

Shortly, we’ll also release support for template specialization when using dust in Kraken. Experiences often need to deviate based on locale, but also for AB tests and device types. It’s subpar to have this logic cluttering your code, and specialization solves this.

Application security

Security is important to us and, while there are a good number of best practices available for web applications, most are typically not enabled by default. Kraken enables these for you and uses configuration to set up smart defaults; a configuration sketch follows the list below. A few of the more useful ones to call out are:

  • CSRF – Cross-site request forgery support is enabled by default. A token will be added to your session and if the user is going to perform any data changing method, e.g. POST, PUT, DELETE, then the template must return the token value. This protects against malicious websites changing data on your user’s behalf.
  • XFRAMES – Using an HTML frame element to frame another website and trick users into performing actions they did not intend is called click-jacking. XFRAMES headers protect against this by restricting who can frame the web application. By default this is set to SAMEORIGIN, which means only you can frame your website.
  • CSP – Content Security Policy enables you to tell the browser what types of resources are allowed and enabled for your web application.
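Here is a sketch of what that security section of the configuration might look like (the exact keys vary by kraken version; these are illustrative of the defaults described above):

{
    "appsec": {
        "csrf": true,
        "xframe": "SAMEORIGIN",
        "csp": false
    }
}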

How open source has changed PayPal

Kraken was the first major release from PayPal into the open source world and has been hugely successful in changing the way we think about software. In a way, it helped pave the way for us to hire Danese Cooper as our first head of open source at PayPal! We have historically been a company that kept to itself, and thus a lot of code which may have been useful to the community was instead developed in a proprietary manner.

Kraken was built to be the opposite of this: it is publicly available. This allowed us to keep out what I consider PayPal’isms – the secret sauce specific to PayPal – and to give back to the node community, filling any gaps for others’ benefit.

The node community itself has been very welcoming, and we’ve seen both great interest in our adoption of node and multiple external contributions to the kraken codebase. This has been inspiring and has definitely solidified that we made the right choice in going open source.

Try it out

If you’re interested in trying out kraken, head on over to krakenjs.com where you can find instructions and sample code to get you on your way. You can find other open source offerings from PayPal at paypal.github.io.

If any of this sounds interesting come work for us!

Node.js at PayPal

There’s been a lot of talk about PayPal moving to node.js for an application platform. As a continuation of part 1 of Set My UI Free, I’m happy to say that the rumors are true and our web applications are moving away from Java and onto JavaScript and node.js.

Historically, our engineering teams have been segmented into those who code for the browser (using HTML, CSS and JavaScript) and those who code for the application layer (using Java). Imagine an HTML developer who has to ask a Java developer to link together page “A” and “B”. That’s where we were. This model has fallen behind with the introduction of full-stack engineers, those capable of creating an awesome user interface and then building the application backing it. Call them unicorns, but that’s what we want and the primary blocker at PayPal has always been the artificial boundary we established between the browser and server.

Node.js helps us solve this by enabling both the browser and server applications to be written in JavaScript. It unifies our engineering specialties into one team which allows us to understand and react to our users’ needs at any level in the technology stack.

Early adoption

Like many others, we slipped node.js in the door as a prototyping platform. Also like many others, it proved extremely proficient and we decided to give it a go on production.

Our initial attempts used express for routing, nconf for configuration, and grunt for build tasks. We especially liked the ubiquity of express, but found it didn’t scale well across multiple development teams. Express is non-prescriptive and allows you to set up a server in whatever way you see fit. This is great for flexibility, but bad for consistency in large teams. Over time we saw patterns emerge as more teams picked up node.js, and we turned those patterns into kraken.js; it’s not a framework in itself, but a convention layer on top of express that allows it to scale to larger development organizations. We wanted our engineers to focus on building their applications and not just on setting up their environments.

We’ve been using kraken.js internally for many months now (we’ll be open sourcing it soon!) and our engineering teams are eager to finally bring the internal node.js applications we’ve built live.

Bringing node.js to production

Our first adopter of node.js in production wasn’t a minor application; it was our account overview page and one of the most trafficked apps on the website. We decided to go big, but we also mitigated that risk by building the equivalent Java application in parallel. We knew how to deploy and scale Java applications, so if anything went wrong with the node.js app, we could fall back to the Java one. This provided the setting for some interesting data.

Development

We started in January and it took us a few months to get the necessary infrastructure in place for node.js to work in PayPal, e.g. sessions, centralized logging, keystores. During this time we had five engineers working on the Java application. Two months into the Java development, two engineers started working on the parallel node.js app. In early June they met at a crossroads: the applications had the same set of functionality. A few details stood out after we ran the test cases and both applications passed the same functional tests. The node.js app was:

  • Built almost twice as fast with fewer people
  • Written in 33% fewer lines of code
  • Constructed with 40% fewer files

This provided encouraging evidence to show that our teams could move faster with JavaScript. We were sold and made the decision to put the Java app on hold while we doubled down on the JavaScript one. The great news is that the Java engineers on the project, unsure about node.js in the beginning, delightfully moved over to node.js and are happily committing to a parallel work stream, providing us with double the productivity we were originally seeing.

Performance

Performance is a fun and debatable topic. In our case, we had two applications with the exact same functionality, built by roughly the same teams: one on our internal Java framework based on Spring, and the other built on kraken.js using express, dust.js and other open source code. The application contained three routes, and each route made between two and five API requests, orchestrated the data and rendered the page using Dust.

We ran our test suite using production hardware that tested the routes and collected data on throughput and response time.

Node.js vs Java performance graph

You can see that the node.js application had:

  • Double the requests per second vs. the Java application. This is even more interesting because our initial performance results were using a single core for the node.js application compared to five cores in Java. We expect to increase this divide further.
  • 35% decrease in the average response time for the same page. This resulted in the pages being served 200ms faster, something users will definitely notice.

There’s a disclaimer attached to this data: this is with our frameworks and two of our applications. It’s just about as apples-to-apples a performance test as we could get between technologies, but your mileage may vary. That said, we’re excited by what we’ve seen from node.js’ raw performance.

Future

All of our consumer-facing web applications going forward will be built on node.js. Some, like our developer portal, are already live, while others, like account overview, are in beta. There are over a dozen apps already in some stage of this migration and we will continue to share data as more applications go live. This is an exciting time for engineering at PayPal!

If any of this sounds interesting come work for us!

Dust JavaScript templating, open source and more

A year ago we set out on an effort to revitalize PayPal’s user interface. As you may know, there are quite a few pages on our site with designs that date back years and it’s about time we got rid of them! To accomplish this, a large part of what we had to overcome was technical debt, and we needed a technology stack free of this which would enable greater product agility and innovation. With this in mind we set out to “free our UI bits”.

At the time, our UI applications were based on Java and JSP, using a proprietary solution that was rigid, tightly coupled and hard to move fast in. Our teams didn’t find it complementary to our Lean UX development model, so they would build their prototypes in a scripting language, test them with users, and then later port the code over to our production stack.

There are obvious issues with this approach, so we decided it was time to change things and came up with a few requirements:

  1. The templating must be decoupled from the underlying server technology and allow us to evolve our UIs independent of the application language
  2. It must be open source
  3. It must support multiple environments

Decoupling our templates

Templating was the cornerstone piece, but we didn’t need to dwell on it too much. Everyone is familiar with JavaScript templating nowadays and it has huge benefits with empowering both client and server side rendering, so it seemed like a given. Dust, Mustache, Handlebars: they all have similar approaches to JavaScript templating, each with their own pros and cons. In the end, we opted to partner with LinkedIn and go with Dust templates and have been happy to contribute back to the project where we can.
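To illustrate why JavaScript templating works on both sides of the wire, here is a minimal Dust render in Node (the template name and data are made up):

var dust = require('dustjs-linkedin');

// compile and register a template, then render it with a context
dust.loadSource(dust.compile('Hello {name}!', 'hello'));

dust.render('hello', { name: 'PayPal' }, function (err, out) {
    console.log(out); // Hello PayPal!
});

The same compiled template can be shipped to the browser and rendered there with the same API.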

As it turned out, Dust was so successful and well received in the company that before we finished integrating it into our framework we found ourselves with half a dozen teams using it. We found JavaScript templating to be a great complement to a Lean UX iteration cycle and have been impressed with the speed at which our teams can turn design ideas into code. Along the way we opted for a few other enhancements to our templating, including leveraging Bootstrap for initial concepts and using Bower to manage our components.

By the time Dust was officially released, all of our products were using it in place of JSP.

Open source all the things

We made a conscious decision to fully embrace open source in our UI. It sounds like common sense, but enterprise-scale companies often have a habit of creating proprietary solutions – at least we’ve made this mistake in the past – and keeping them proprietary. There are three major downsides to this approach:

  1. Proprietary solutions create an artificial ramp up time for new employees to become productive
  2. It forces a portion of your team to focus their energy on building your solution rather than your product
  3. These solutions are rarely better than the ones available in the open source community

Through a bit of exploration the rest of our UI stack started to fall into place:

  • LESS was the CSS pre-processor we chose since it aligned well with our JavaScript stack and worked with Bootstrap
  • RequireJS was chosen as our JavaScript module loader due to its clear and descriptive nature
  • Backbone.js worked out as our client-side MV* framework
  • Grunt was being used for our UI build tasks, because, well, Grunt is just awesome
  • Mocha was starting to be explored for enabling CI driven workflows for our UIs

Multiple environments

One requirement we had was that the templating solution must support multiple environments. An interesting fact of being an internet company for more than a few years is that you tend to end up with multiple technology stacks. In our case, we have legacy C++/XSL and Java/JSP stacks, and we didn’t want to leave these UIs behind as we continued to move forward. JavaScript templates are ideal for this. On the C++ stack, we built a library that used V8 to perform Dust renders natively – this was amazingly fast! On the Java side, we integrated Dust using a Spring ViewResolver coupled with Rhino to render the views.

Node.js was also introduced into the company around this time. Since our teams already knew JavaScript the pure development speed we could achieve with node was highly desirable to our product agility. Plus, integrating Dust was trivial since they are both JavaScript.

Finally, let’s not forget the browser as a rendering environment. Dust allows us to easily perform client side rendering. This brings up an interesting point though: rather than saying we’re going to render all pages in the browser because we can, we’re opting to do what makes sense for both your browser and our engineers. Client side rendering is great, but not always needed. You’ll notice on the new PayPal pages that the first page is always rendered on the server to bootstrap the app, but from there it’s up to the UI engineers to decide if they need client side rendering or not. We like this flexibility.

Live Dust apps

So, all of this is good, but where are we using Dust and the new technologies today? Well, it’s actually live on a good portion of our website that’s been redesigned. This includes our:

  • Home pages
  • Checkout (currently being A/B tested)
  • New user signup
  • A lot more on the way!

What’s next

Our UI engineers are now happy. They can iterate quickly on production-style apps with their design counterparts and get those ideas out of their heads and in front of customers. We’re now starting to see a build, test, learn culture form, which is a long way from the waterfall model we traditionally had with our design partners, and hopefully our users start to notice as more of the website gets updated.

Changing the front end technologies at PayPal was a huge success, but we’re now looking to make another step. Node.js started to organically spread through the company as a part of these efforts and we will take a look at that in Part 2.

Finally, we’re hiring, so if any of this sounds interesting come work for us!