Monthly Archives: February 2014

Node.js Application Engineers Wanted!


PayPal made the decision last year to write all of our new applications on our node.js Kraken stack. Since we are also redesigning almost all of our experiences, there are lots of opportunities for Node developers (or aspiring Node developers).

Web Application Engineering

When I joined PayPal I formed the User Interface Engineering (UIE) team, and about a year into the job I centralized this organization. This allowed me to strengthen the discipline and simplify our technology stack. By enabling applications to be built in node.js on the server side and DustJS on the front-end, we essentially made JavaScript our lingua franca for application development. In fact, we are removing the label “UIE” and instead forming application engineering teams that can work on the complete application built on top of our new PayPal as a Service RESTful Platform. Application Engineering teams can quickly iterate on our experiences and realize our vision to “Bring Design to Life”.

If you are the kind of engineer who is passionate about bringing great experiences to life, takes pride in engineering craftsmanship, and loves working in a collaborative environment with your product & design partners, then we want to talk to you.

Why Work at PayPal?

I have said elsewhere that the key to developer happiness is what you work on, who you work with, and how you do your work.

What can you work on?

At PayPal we have 143 million active accounts in 193 markets and 26 currencies around the world, and we process more than 9 million payments every day.

Businesses are built upon and rely upon our services. Consumers find convenience in paying online & offline. And developers are finding it easier and easier to bring really cool payments solutions to consumers wherever they are. This means we have opportunities in business, consumer, payment and developer products.

This is impact at scale!

And in addition, PayPal is totally committed to open source. We use open source, release open source, and we work in an open source manner (we use GitHub internally). Our commitment to open source recently got a lot bigger when we hired the Open Source “Diva”, Danese Cooper, to head up Open Source at PayPal.

Your impact can extend to the broader community as well. Now that is cool.

Specifically, if you join one of our Application Engineering teams you will be able to work on any part of the application stack, from node.js on the server up to backbone.js on the client. We are part of the DustJS consortium with folks from LinkedIn, Yahoo!, Netflix, and other companies. And guess what? We aren’t married to any of these specific solutions and continue to evaluate the best approach to our solutions. Come help us figure it out!

We also have roles on our core Node team. If you are a Node aficionado then you will definitely want to talk with us, as we have some exciting opportunities on the Kraken team.

Who can you work with?

We have been fortunate to assemble an awesome team of leading engineers that come from many of the top companies in the valley (and beyond). Take our work on KrakenJS, Bootstrap Accessibility Plugin, PayPal Beacon, or our new secure one-swipe fingerprint payment system for the Samsung Galaxy S5 as examples of the engineering craftsmanship at PayPal.

Or look at the awesome companies that have joined the eBay/PayPal family. Companies like Milo, Hunch, Braintree/Venmo, and StackMob, to name just a few.

What this means is you will be working with smart people who will challenge you every day.

How will you work?

We have made a major shift to Agile in the last year and now operate in a much nimbler manner. We practice Customer Driven Innovation at the customer ideation phase, Lean UX with our design partners, and agile to roll out features.

Frankly, it is no secret that in the past PayPal had a lot of legacy technology and processes. We have been obliterating them over the last few years, and some of our teams now roll out changes in real time; the rest will be operating like this in the near future. We believe in the vision of developers owning the code from creation to testing to deployment. Ultimate power in your hands.

We are constantly figuring out better ways to work and would love you to join our teams to help us make PayPal the best place to work in the valley.

Ok, so here is the scoop on being an Application Engineer at PayPal.


  • Write web application code following best practices of accessibility, internationalization, TDD.
  • Partner closely with design & product to craft great product experiences.
  • Deliver code in a dev ops environment.
  • Be a crafts(wo)man and encourage code craftsmanship across team.
  • Deliver code in an agile team environment.
  • Lead code reviews to drive teams to the highest standards for node.js apps & web apps.
  • Provide architectural leadership in product development team.
  • Drive teams to follow clean code principles.
  • Drive innovation through rapid prototyping and iterative development.


  • Experience developing node.js applications, or solid experience building applications on top of RESTful APIs
  • Solid knowledge of common client JavaScript technologies
  • Experience with JavaScript templating systems (Mustache, Handlebars, Dust, etc)
  • Comfortable with modern JavaScript architectures
  • You know the “good parts” of JavaScript
  • Solid knowledge of algorithms, design patterns, and componentization approaches
  • Experience with cross-browser, cross-platform, and design constraints on the web
  • Experience with software design patterns, plus strong problem-solving and troubleshooting skills
  • Passion for engineering great experiences
  • Proven problem-solving and interpersonal communication skills.
  • Ability to operate effectively both independently and within a team.

Contact Us

Interested? You can contact us at

Inside the PayPal Mobile SDK 2.0


We’re pleased to announce the 2.0 release of the PayPal Mobile SDK. This major version update introduces important new capabilities as well as improvements under the hood. In this post we’ll highlight a few of the major changes, with some narrative about how we got here.

TL;DR? Check out the PayPal Developer Docs for an overview. In-depth docs and SDK downloads can be found in the PayPal iOS SDK on GitHub or PayPal Android SDK on GitHub.

SDK Focus

In writing the first version of the Mobile SDK last year, our team focused on four high-level values:

  1. Fast and delightful user experience making payments
  2. Simple, equally delightful developer experience taking payments
  3. Consistent, bulletproof quality across devices and platforms
  4. Open, direct technical communication between developers (you!) and us

To keep the bar high, we limited the first SDK release to only single payments, using PayPal or credit card (with scanning). While developers looking for a basic drop-in payment library found the first version helpful, we knew a complete native mobile payments solution needed to offer more. So, without losing focus on simplicity, quality, and community, we’ve written the 2.0 Mobile SDK to enable seamless commerce in modern, sophisticated mobile commerce and marketplace apps.

Future Payments

The marquee feature in 2.0 is Future Payments. With Future Payments, developers can gather payment information from a customer just once, and use it to process all future payments from that customer. If you’ve added PayPal as a payment option in Uber, then you’ve used it. Uber piloted this feature with us to provide their customers with a PayPal payment option that’s fast and simple. With 2.0, we’re opening up Future Payments to all mobile developers.

So how does it work? To identify who should get paid, you give the SDK a public OAuth2 client_id as input. The SDK then authenticates a user and asks for payment authorization; it returns a short-lived OAuth2 authorization code. Your app sends that authorization code to your servers. Your servers use this code to request OAuth2 access and refresh tokens with the future_payments scope. You can use the access token right away to charge the user, and store the refresh token for the future. Any time that you need to make payment API requests for this user in the future, you can exchange that refresh token for an access token, and you’re off to the races — no need to bother the user again.

Phew! It sounds more complicated than it actually is, and the SDK includes step-by-step integration guides:

Auth & Capture

One frequently requested feature is the option to decouple authorization of a payment from capture at a later time. This is a standard capability offered by credit card gateways. It is useful, for example, when shipping physical goods – you can authorize your customer’s payment when the order is placed, then later capture the funds only when the goods actually ship.

The 2.0 SDK uses new, flexible PayPal REST payment APIs – you can create a payment with either a sale or authorization intent. Thanks to the improved APIs, our team was able to add Auth/Capture support with only minor code changes. And, notably, the intent setting works identically for PayPal and credit card payments.
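As a sketch of what the intent setting looks like in a request body (the field names follow the general shape of the REST Payments API, but treat them as illustrative and verify against the current docs):

```javascript
// Build the JSON body for a payment-create call. The only difference
// between pay-now and auth/capture is the intent field; this helper
// and its validation are illustrative, not the SDK's actual API.
function buildPayment(intent, total, currency) {
  if (intent !== 'sale' && intent !== 'authorize') {
    throw new Error('intent must be "sale" or "authorize"');
  }
  return {
    intent: intent, // 'sale' captures funds now; 'authorize' captures later
    payer: { payment_method: 'paypal' },
    transactions: [{ amount: { total: total, currency: currency } }]
  };
}

// Authorize when the order is placed, capture when the goods ship:
var authPayment = buildPayment('authorize', '24.99', 'USD');
```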

REST and OAuth 2.0

As PayPal continues to optimize mobile-first platform offerings, the PayPal Mobile SDK team works closely with counterparts in the API teams to set the roadmap and nail the integration with the OAuth2 Identity services and REST Payments API.

The 2.0 SDK now uses only the new PayPal APIs. Migrating to these APIs was critical to making the 2.0 SDK a reality. Now that we’re on the new stack, the SDK is well positioned to gain new capabilities in pace with the platform. We look forward to adding further improvements!

Check it Out

Learn more about the 2.0 PayPal Mobile SDKs by checking out the PayPal Developer Docs. When you’re ready to take a look at the APIs, sample code, README, and integration docs, head on over to GitHub:

We’d love to hear your integration experience and feedback.

About the PayPal SDK Team

We’re a distributed team (Portland, Austin, San Jose) that creates great tools for developers and delightful experiences for their users. When not releasing sweet PayPal SDKs, we can be found variously drinking copious amounts of coffee, writing science fiction, biking long distances, snuggling with a Ziggy, or driving down the California coast. Want to get in touch or see what we’ve worked on? Find us on GitHub: Dave, Brent, Tom, Matt, Jeff, Juwon, Avi, Josh, Abhijeet, Mike, Avijit, Roman, and Bunty.

Hello Newman: A REST Client for Scala


Hi everyone, I’m Aaron. I came to PayPal from the StackMob team recently, and while I was at StackMob, I co-created the Newman project. Our original goal was to unify high-quality HTTP clients on the JVM into a single, idiomatic Scala interface. I believe we’ve accomplished that, and we’ve moved on to higher level features to make it a great choice for talking to a RESTful service.

I recently gave a Newman talk at the Scala Bay Meetup in Mountain View, CA. A big thanks to everyone who came. I really appreciated all the great questions and feedback!

For those who missed my talk, I’ll give a recap here as well as describe Newman in more detail, and talk about some future plans. You can also check out all the code samples and slides from the talk at

Background & Motivation

At StackMob, we ran a service oriented architecture to power our products. To build out that architecture we ran a distributed system composed of many services inside our firewall.

Every service in the system used HTTP for transport and JSON for serialization to communicate over the wire. The challenge we faced was this: How do we easily and flexibly send and receive JSON data over HTTP in Scala? We had the same challenge for building servers and clients.

When we began investigating the existing pool of HTTP clients, we turned to the massive JVM community for high quality clients that could handle our needs. We found a lot of them! I’ll highlight two clients with which we gained significant experience.

Apache HttpClient

When we looked at the Apache foundation, we found the HttpClient project. As expected, we found HttpClient to be very high quality. We used this library for a lot of our initial work, but we found a usability problem – it took too much code to do a simple request. The below code shows setup and execution logic for a GET request:

import java.net.URL
import org.apache.http.client.methods.HttpGet
import org.apache.http.conn.ClientConnectionManager
import org.apache.http.entity.ByteArrayEntity
import org.apache.http.impl.client.{AbstractHttpClient, DefaultHttpClient}
import org.apache.http.impl.conn.PoolingClientConnectionManager
import org.apache.http.params.HttpConnectionParams

// set up a connection manager and client.
// you'd normally only do this once in your module or project.
val connManager: ClientConnectionManager = new PoolingClientConnectionManager()

val httpClient: AbstractHttpClient = {
  val client = new DefaultHttpClient(connManager)
  val httpParams = client.getParams
  HttpConnectionParams.setConnectionTimeout(httpParams, connectionTimeout)
  HttpConnectionParams.setSoTimeout(httpParams, socketTimeout)
  client
}

// now make the actual GET request
val url = new URL("…") // target URL elided
val req = new HttpGet(url.toURI)
val headers: List[(String, String)] = ???
headers.foreach { case (name, value) =>
  if (!name.equalsIgnoreCase("Content-Type")) req.addHeader(name, value)
}
val body: Array[Byte] = Array('a'.toByte, 'b'.toByte, 'c'.toByte)
// oops, sending a request body with a GET request doesn't make sense
req.setEntity(new ByteArrayEntity(body))
val resp = httpClient.execute(req)

Twitter Finagle

Finagle is Twitter’s core library for building distributed systems. The company has built almost all of their distributed systems infrastructure on top of this library. Furthermore, it represents a major abstraction that one of its creators has called services. See this paper for more.

Finagle is built atop the Netty project, so we expected Finagle to handle high concurrency workloads, which was important in many of our use cases. Also, we had used Netty directly to build some of our servers and found it stable, with a good community. With Finagle we found a similar pattern. For more on Finagle and Netty at Twitter, check out the recent Twitter blog posts.

Building HTTP clients with Finagle required less overall code than with the Apache library, but it is still somewhat involved. The following shows the setup and execution code for the same GET request as above:

import java.net.URL
import com.twitter.finagle.builder.ClientBuilder
import com.twitter.finagle.http.{Http, RequestBuilder}
import com.twitter.util.Future
import org.jboss.netty.buffer.ChannelBuffer
import org.jboss.netty.handler.codec.http.{HttpMethod, HttpResponse}

// Set up the client. It's bound to one host.
val host = "…" // host elided
val url = new URL(host)
val client = ClientBuilder()
  .hosts(host) // there are more params you can set here
  .codec(Http())
  .hostConnectionLimit(1)
  .build()

// Execute the request.
// Make sure the request is going to the same host
// as the client is bound to.
val headers: Map[String, String] = ???
val method = HttpMethod.GET
// this is an org.jboss.netty.buffer.ChannelBuffer
val channelBuf: ChannelBuffer = ???
val req = headers
  .foldLeft(RequestBuilder().url(url)) { case (builder, (name, value)) =>
    builder.setHeader(name, value)
  }
  // oops, sending a request body with a GET request doesn't make sense
  .build(method, Some(channelBuf))
val respFuture: Future[HttpResponse] = client.apply(req)

respFuture.ensure {
  client.close() // don't forget!
}

In Summary

In our search, we looked at other libraries as well, but found common patterns with all of them:

  1. HTTP libraries on the JVM tend to be very stable and well tested, or built atop very stable and well tested core libraries.
  2. You usually have to write setup and cleanup code.
  3. It usually takes at least 5 lines of code to execute a request.
  4. The plain Java libraries (obviously) require you to write non-idiomatic Scala.

Overall, the libraries we found required us to remember a lot of code, common patterns and sometimes implementation details. With so much to remember, we decided to either commit to a single library or write a wrapper around each that we wanted to use.

In Comes Newman

Newman started as an informal wrapper around Apache HttpClient. As our overall codebase grew and evolved, we needed to use new clients and knew we needed to formalize our original wrapper into a stable interface to wrap all the messy details of each implementation.

We began with the core interface and two implementations: ApacheHttpClient and FinagleHttpClient. After we deployed code using our first Newman clients, we found more benefits to the core abstraction:

  1. Safety – We iterated on the interface and used Scala’s powerful type system to enforce various rules of HTTP and REST. We’re now at a point where our users can’t compile code that attempts to execute various types of invalid HTTP requests.
  2. Performance – Behind the interface, we added various levels of caching and experimented with connection pooling mechanisms, timeouts, and more to extract the best performance from Newman based on our workloads. We didn’t have to change any code on the other side of the abstraction.
  3. Concurrency – Regardless of the underlying implementation, executing a request returns standard Scala Futures that contain the response. This pattern helps ensure that code doesn’t block on downstream services. It also ensures we can interoperate with other Scala frameworks like Akka or Spray. The Scala community has a lot of great literature on Futures, so I’ll defer to those resources instead of repeating things. The Reactive Manifesto begins to explain some reasoning behind Futures (and more!) and the standard Scala documentation on Futures shows some usage patterns.
  4. Extensibility – Our environments and workloads change, so our clients must change too. To effect that change, we just switch clients with one line of code. We also made the core client interface in Newman very easy to extend, so we can implement a new client quickly and spend more time getting the performance right.

Higher Level Features

We had our basic architecture figured out and tested, and it looks like this:


A few notes about this architecture:

  • HttpClient is heavy – it handles various caching tasks, complex concurrency tasks (running event loops and maintaining thread pools, for example), and talking to the network.
  • HttpClient creates HttpRequests – each HttpRequest is very small and light. It contains a pointer back to the client that created it, so it’s common to have many requests for one client.
  • HttpRequest creates Future[HttpResponse] – the Future[HttpResponse] is tied to the HttpClient that is executing the request. That Future will be completed when the response comes back into the client.

With this architecture, we had proven to ourselves in production that we had a consistent, safe and performant HTTP client library. Our ongoing task now is to build features that make building and running systems easier for everyone who uses Newman. Here are a few higher level features that Newman has now:

  • Caching – Newman has an extensible caching mechanism that plugs into its clients. You define your caching strategy (when to cache) and backend (how and where to store cached data) by implementing interfaces. You can then plug them in to a caching HttpClient as necessary. Also, with this extensible caching system, it’s possible to build cache hierarchies. So far we’ve built an ETag caching strategy, a simple read-through caching strategy, and an in-memory caching backend. All ship with Newman.
  • JSON – As I mentioned at the beginning of this post, we use JSON extensively as our data serialization format over the wire, so we built it into Newman as a first class feature. Newman enables full serialization and deserialization to/from any type. Since JSON operations are built into the request and response interfaces, all client implementations get JSON functionality “for free.”
  • DSL – We built a domain specific language into Newman that makes even complex requests possible to create and execute in one line of code. The same goes for reading, deserializing, and decoding for handling responses. The DSL is standard Scala and provides more type safety on top of core Newman. Newman DSL code has become canonical.

The Result

Newman abstracts away the basics of RPC. For example, we were able to replace 10+ lines of code with the following (excluding imports and comments in both cases):

implicit val client = new ApacheHttpClient() //or swap out for another
GET(url(http, "")).addHeaders("hello" -> "readers").apply

This code has more safety features than what it replaced in most cases and the setup and teardown complexities are written once and encapsulated inside the client. We have been pleased with Newman so far and anticipate that next steps will make Newman more powerful and useful for everyone.

The Future

We have a long list of plans for Newman. Our roadmap is open on GitHub at The code is also open source, licensed under Apache 2.0. Read the code, file issues, request features, and submit pull requests at

Finally, if similar distributed systems work excites you, we build very large scale, high availability distributed systems and we’re hiring. If you’re interested, send me a GitHub, Twitter or LinkedIn message. Regardless, happy coding.

– Aaron Schlesinger –



The Dust Bowl


This dust bowl post is not about yet another American football bowl game. Nor is it about what might be coming to pass for those of us in California with no rain. Rather, it is a recap of the first gathering of dust.js contributors and users. The goal of the meeting was to set directions and schedules around the evolution of the dust templating engine (

The companies represented at this first Dust Bowl were: Benefitfocus, eBay, LinkedIn, PayPal, StubHub, and Yahoo.

The meeting kicked off with each company providing a summary of how they are using dust. Then we moved on to the main agenda for the day:

  • New build mechanisms for the dust GitHub project
  • Quarterly release plans for 2014
  • Directions for Dust.js helpers
  • Making extension/hooking of dust easier for frameworks
  • Security-related issues
  • Internationalization and globalized object formatting (dates, money, etc.)
  • Review of major open issues to determine consensus and direction
  • Futures: promoting dust, meetups, blogs, expand awareness, future dust bowls

What the future holds

Like kids eager to open their presents, you probably want to see what new goodies are in the works, so let’s go there straight away. Some of the items are exploratory and some are quite concrete. The goal is to have a release per quarter in 2014.

Q1 2014

  • Establish collaborative mechanisms for contributors in the intervals between dust bowls. Ideal features: Mobile friendly, milestones, tasks, discussion board, notifications
  • Evaluate options for making 3rd party dust helpers easier for the community to find and load
  • The size of the dustjs-helpers library is a factor when adding new helpers. Explore ways to let someone build a library with just the ones they need.

Features for Q1 2014

  • A solution for retaining whitespace in templates for HTML/JS (Issue #238)
  • Lower some error messages to warnings
  • Evaluate helpers for inclusion in core set (See section below)
  • Parser syntax errors should be passed to callback (Issue #107)
  • Unspecified parameter results in null value (Issue #252)
  • Support template that contains JavaScript // comments (Issue #300) – part of whitespace work.
  • Partial parameters lower on stack than context (Issue #313). Reverse a code change that is no longer needed after the changes to how paths work.
  • Add CommonJS support (Issue #325)

If you only ever add features to a language, it becomes bloated. Some things turn out to be bad decisions in hindsight or of little value. As part of a New Year slimming down, we discussed deprecating some language features. The new warning/error logging capability added in release 2.1.0 would provide a notification mechanism for deprecation. The features we might want to put “on notice” for future removal are:

  • Manual contexts on sections and partials {>partial:context}
  • JavaScript functions in the JSON context (use helpers)

As part of the Q1 cleanup, we will review old GitHub issues with an eye to closing ones that are unlikely to ever get done and consolidating others into a single new issue that covers several related topics.

Q2 2014

Q2 will be a performance enhancement-oriented release. We will go back and compare current performance with our baseline, determine where slowdowns arose, and find ways to make things faster. General performance profiling will further inform areas of performance improvement work.

As part of the performance work, we will add build automation so performance changes are measured at each commit. Examining compiled code patterns helps us understand whether compiler changes can improve runtime performance and the minification of generated code.
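The per-commit check can be quite small. Here is an illustrative JavaScript sketch of the idea, not the project’s actual harness: time a rendering function, then compare against a stored baseline with a tolerance so noisy runs don’t fail the build:

```javascript
// Time N invocations of a function and return elapsed milliseconds.
function benchmark(fn, iterations) {
  var start = Date.now();
  for (var i = 0; i < iterations; i++) fn();
  return Date.now() - start;
}

// Flag a regression if the current run exceeds the baseline by more
// than the given tolerance (e.g. 0.2 = allow up to 20% slowdown).
function regressed(currentMs, baselineMs, tolerance) {
  return currentMs > baselineMs * (1 + tolerance);
}
```

A CI job would run `benchmark` on a representative template render, load the baseline from the previous release, and fail the commit if `regressed` returns true.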

Organizationally, we proposed Dust Bowl II for around the end of the second quarter.

Features for Q2 2014

  • Enhance the 4-panel playground for users. Make it 3 panels by hiding generated code, with an option to show it. Provide a way to support user partials and helpers in the playground.
  • Add tests to better cover internal private functions.
  • Improve testing environment to make it easier to test complex templates and contexts.
  • Domain has been registered. Build out a pre-release user portal to “all things dust” at this URL.
  • Evaluate allowing single quotes for parameters in sections and partials (currently only double quotes are allowed)
  • Make external APIs more easily hookable for extension.
  • Extend onLoad to allow passing context.
  • Allow user-provided alternatives to the dust cache.
  • Allow alternative logger to be hooked into the dust logging.
  • Look at async vs sync behaviors that exist today and possible ways to control which is used.

Q3 2014

Official launch of the new site to target new and advanced users. The site would include:

  • Blog
  • Upcoming releases
  • Documentation
  • Simplified Sandbox playground for trying out dust
  • Examples ranging from simple to advanced
  • Content Restructure

Features for Q3 2014 

  • More robust XSS filtering library (LinkedIn and PayPal have these)
  • Strong emphasis on XSS risks of using current @if due to it using eval. Deprecate it and provide safer alternatives (@or, @if interpretive version)
  • LinkedIn and PayPal have security scanners for dust templates. Consider open-sourcing. See also for a general dust lint-like scanner
  • Extend the filter implementation to give users the ability to write more powerful filters. Step 1: Make the context available to filters, which opens up a decent amount of extra functionality for things like double evaluation of braces. Step 2: Extend the compiler to allow a filter with parameters, like: {name | filter(param1, param2, …) }.
  • Explore compiler enhancement for static final blocks in template to capture complex text blocks for multiple reuse
  • Begin regional meetups to promote dust

Q4 2014

  • Look at more “way out there” directions like HtmlBars (
  • Look at status of Web Components work and how it might integrate with dust.

So that’s the best guess currently as to how releases look for the upcoming year.  Let’s look at a few other topics that were discussed during the day.

Directions for Dust.js helpers (some candidates for general utility inclusion)

  • @provide:
  • @if: interpretive version –
  • @iterate: Issue #48 in DustHelpers repo. See also

Localization/Internationalization (No release identified)

  • PayPal kraken-js open source has an internationalization solution for messages
  • LinkedIn @format tag tackles a number of issues like formatting numbers, currency, date/times, and more
  • Look at intl.js polyfill for future language extension areas
  • Look at which has some L10N things

GitHub Releases and Issues

  • Consider using new GitHub Releases mechanism. Generally favorable opinion on doing this.
  • Look at (Trello-style for GitHub issues) to manage issues.
  • Consider creating a GitHub issue bot to help manage the backlog/new submittals.


We wrapped up the day by discussing how we can spread knowledge and interest in dust. Ideas included:

  • Blogging – you’re reading one right now
  • Be active on StackOverflow area for dust
  • A standard introductory presentation that can be given at local meetups
  • Contact local colleges and universities to promote JavaScript, Node, and dust. Possible seminar presentations.
  • Meet online quarterly and in person twice a year.

At the end of the meeting, all of us were pleased at the plans made for dust. Hopefully, the community will find some useful new features in the plans. If you want to join the action as a contributor, head right over to or for the helpers — check out the issues and see if you spot something you are interested in contributing to. Then fork a copy, do your thing and send us a pull request.

– Richard Ragan: @rrragan,

PayPal hosts BayJax meetup dedicated to JavaScript testing


On November 20, 2013, the BayJax meetup group convened at PayPal HQ to talk about JavaScript testing. Four presenters, including myself, covered various facets of JavaScript testing. While the driving rain that evening dampened attendance somewhat (there were a couple of leftover pizzas and some empty chairs), the high-quality videos linked below can serve to widen the audience for this informative event.

Vojta Jina on Karma: Part 1 and Part 2

Vojta Jina discussed the Karma testing framework. In addition to running the exhaustive suite of tests for AngularJS, Karma is a standout JS unit testing framework in its own right. Enjoy:

Reid Burke on Testing Web Apps: Part 1, Part 2, and Part 3

Setting up a continuous integration environment to run 100K+ tests on 12 different browsers is hard, but necessary. Managing the feedback from such a system is also hard, but necessary. Reid shares his valuable perspective from meeting this challenge for the YUI team, using Yeti, SauceLabs, Jenkins, and parallelization.

Santiago Suarez Ordoñez: Part 1 and Part 2

Santiago, a SauceLabs Senior Infrastructure Developer and Selenium committer, looks at the history of JavaScript testing and also towards its future. He discusses the origin of Selenium, Selenium2, how the Selenium2 API is becoming a W3C spec, and how the web driver codebase will then shrink to basically nothing. He also discusses what he sees as the ideal approach to automated testing for small teams. Finally, he outlines a very cool cross-browser JavaScript testing API which SauceLabs is currently developing.

Matt Edelman on Nemo: Part 1, Part 2, and Part 3

PayPal has adopted NodeJS for its web applications (please see krakenjs). We wanted to write our browser automation tests in JavaScript and execute them from within the application they are testing. While there were a couple of different webdriver APIs available in npm (selenium-webdriver and wd), there wasn’t a framework available that provided the flexible configuration needed to run the same tests across multiple environments and devices. I wrote the Nemo module to satisfy these requirements, as well as to suggest (not enforce) a factorization which promotes maintainable, readable, and reusable code.
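Nemo itself wasn’t yet open source at the time of this talk, but the configuration idea can be sketched roughly in plain JavaScript. The names here (`resolveConfig`, `environments`, the capability fields) are hypothetical illustrations, not Nemo’s actual API: the same test code runs against different browsers and devices by merging a base config with an environment-specific override.

```javascript
// Base settings used when no environment override applies.
var baseConfig = {
  serverUrl: 'http://localhost:4444/wd/hub', // local Selenium grid
  desiredCapabilities: { browserName: 'phantomjs' }
};

// Per-environment overrides selected at run time (e.g. via an env var).
var environments = {
  ci:     { desiredCapabilities: { browserName: 'firefox' } },
  device: { desiredCapabilities: { browserName: 'safari', platformName: 'iOS' } }
};

// Merge the override (if any) over the base config.
function resolveConfig(envName) {
  var override = environments[envName] || {};
  return {
    serverUrl: override.serverUrl || baseConfig.serverUrl,
    desiredCapabilities: override.desiredCapabilities || baseConfig.desiredCapabilities
  };
}
```

The point of this shape is that tests only ever see the resolved config, so switching from a local headless run to a device farm is a one-flag change rather than a code change.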

As you can see in the diagram below, the system uses the JSON wire protocol, which can communicate with numerous drivers (Selenium grid, ghostdriver, ios-driver, Appium, SauceLabs, etc.) in order to automate just about any browser or even native mobile applications:


We are using Nemo along with Grunt (see grunt task grunt-loop-mocha) and Mocha to write and run automated tests throughout the entire development lifecycle: development, QA, and continuous integration. Look for Nemo to be open source very soon!

One notable change since this talk occurred is that you can now parallelize Nemo tests in order to complete test suites more quickly.

Happy Testing!