Tag Archives: Open Source

Maintaining JavaScript Code Quality with ESLint


As a lead UI engineer on the consumer web team at PayPal, I’ve often seen the same patterns of mistakes repeated over and over again. To put an end to the most egregious errors we started using JSHint early on in the project. While it was useful for catching major syntax errors, it didn’t know anything about our code, our patterns, or our projects. To improve code quality across the whole consumer web team, we needed a linting tool that would let us teach it how we wanted to code.

Enter ESLint

ESLint was started in 2013 by Nicholas Zakas with the goal of building a customizable linting tool for JavaScript. With that goal in mind, Nicholas made each rule standalone and provided a mechanism for easily adding new rules to the system.

Here is a view of the ESLint directory structure:

[Screenshot: the ESLint source tree, showing each rule as a standalone file]
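The layout at the time looked roughly like this (an abbreviated sketch; the exact contents vary between versions):

eslint/
  bin/
  conf/
  docs/
  lib/
    formatters/
    rules/        <- one file per rule
    eslint.js
  tests/
    lib/
      rules/      <- one test file per rule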

ESLint uses the Esprima parser to convert the source code it is linting into an abstract syntax tree. That tree is passed into each of the rules for further analysis. When a violation is found it is reported back up to ESLint and then displayed.

Understanding Abstract Syntax Trees

An abstract syntax tree (AST) is a data structure that represents the meaning of your code.

Let’s use this simple statement as an example:

[Screenshot: a simple example statement]
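For illustration, assume a statement along these lines (the variable names are invented for this example):

var total = price + tax;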

That can be represented by the following syntax tree:

[Screenshot: the corresponding syntax tree]

When Esprima generates an AST from a piece of code it returns an object. That object includes information not found in the original source, such as the node type, which proves useful later in the linting process.

Here is the Esprima generated AST for our earlier example:

[Screenshot: the Esprima-generated AST for the example statement]
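For the assumed statement above, Esprima produces an object along these lines (trimmed to the fields relevant here):

{
  "type": "Program",
  "body": [{
    "type": "VariableDeclaration",
    "kind": "var",
    "declarations": [{
      "type": "VariableDeclarator",
      "id": { "type": "Identifier", "name": "total" },
      "init": {
        "type": "BinaryExpression",
        "operator": "+",
        "left": { "type": "Identifier", "name": "price" },
        "right": { "type": "Identifier", "name": "tax" }
      }
    }]
  }]
}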

One of the most important pieces of information found here is the node type. In this example there are three types of nodes: VariableDeclaration, VariableDeclarator, and BinaryExpression. These types and the other metadata generated by the parser help programs understand what is happening in the code. We’ll take advantage of that information as we learn to write custom rules for ESLint.

Building A Custom Rule

Here’s a simple example of a rule we use internally to prevent overriding a property we rely on in our express routes. Overriding this property led to several bugs in our code, so it was a great candidate for a custom rule.

[Screenshot: the custom rule’s source]
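A sketch of that kind of rule, in the rule format ESLint used at the time (the object and property names here, res and locals, are stand-ins rather than the actual property we protect):

module.exports = function (context) {
    return {
        "AssignmentExpression": function (node) {
            var target = node.left;
            // only assignments to an object property are interesting,
            // e.g. `res.locals = ...`
            if (target.type === "MemberExpression" &&
                target.object.name === "res" &&
                target.property.name === "locals") {
                context.report(node, "Do not override res.locals.");
            }
        }
    };
};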

As you can see, our rule returns an object whose key is the name of the AST node type we want to inspect. In our case we’re looking for nodes of type AssignmentExpression, because we want to know when a value is being assigned. As ESLint traverses the AST, whenever it finds an AssignmentExpression it passes that node into our function for further inspection.

Within our function we check whether that assignment is happening as part of a MemberExpression, which occurs when we’re assigning a value to a property on an object. If that’s the case, we explicitly check the name of the object and of the property, and then call context.report() to notify ESLint that there has been a violation.

This is a simple example of the power and ease with which custom rules can be built using ESLint. More information about building custom rules can be found in the Working with Rules section of the ESLint home page.

Packaging Custom Lint Rules

ESLint allows you to reference custom rules in a local directory using the --rulesdir flag. You simply tell ESLint where to look for the rules and enable them in your configuration file (using the filename as the key). This works well if the rules are relevant to a single project; however, at PayPal we have many teams and many projects which can benefit from our custom rules. Our preferred method is to bundle rules together as an ESLint plugin and install them with npm.

To create an ESLint plugin you need to create a module which exports two properties: rules and rulesConfig. In our case we’re using the requireindex module to build the rules object from the contents of our rules/ folder. Either way, each key should match the name of a rule and the value should be the rule itself. The rulesConfig property, on the other hand, lets you define the default severity for each of those rules (1 for a warning, 2 for an error). Any module defined in the node_modules folder with a name matching eslint-plugin-* will be made accessible to ESLint.

Here is what our internal plugin looks like:

[Screenshot: the plugin’s entry module]
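A minimal sketch of such an entry module, assuming the rules live in a rules/ folder next to it (the rule name below is hypothetical):

var requireIndex = require("requireindex");

module.exports = {
    // one entry per file in rules/, keyed by the rule's filename
    rules: requireIndex(__dirname + "/rules"),

    // default severities: 1 = warning, 2 = error
    rulesConfig: {
        "no-locals-override": 2
    }
};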

If you search for “eslint-plugin” on the npm website you’ll find many plugins provided by the community at large. These shared modules help us identify best practices and catch potential sources of bugs, and they can be brought in to any of our projects.

Conclusion

We love ESLint at PayPal. We’ve built it into our continuous integration pipeline and have it integrated with our IDEs. We’re glad to see a vibrant community growing up around ESLint and hope to give back more in the future. As our own Douglas Crockford is fond of saying, “Don’t make bugs!” That’s pretty hard to do, but at least linting helps us catch many of them before they can do much harm.

Open sourcing kraken.js


It wasn’t far into our move to node.js that we began to notice an opportunity to contribute back to the community. There were plenty of web application frameworks out there, but items like localization, country adaptation, security, and scalability for large development teams were largely missing. We deal with money, and we do it in 193 markets covering 80 languages and 26 different currencies. That’s a lot of complexity, and it requires multiple teams to develop. Kraken was created to make this process easier.

What kraken offers

Kraken uses the popular express web application framework as a base platform and adds environment-aware and dynamic configuration, advanced middleware capabilities, application security, and lifecycle events. These features make Kraken ideal for enterprise-size companies where consistency across teams is needed, but also useful for node.js beginners who want to focus on building their application and not the application’s framework.

Pre-configured, but customizable

All of the technologies you need to build a web application are pre-configured and stitched together for you by generator-kraken. Creating a new kraken app is as easy as running yo kraken and answering a few questions.
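The typical flow looks something like this (assuming Node.js and npm are installed; depending on the version, the generator may also expect grunt-cli and bower to be installed globally):

npm install -g yo generator-kraken
yo kraken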

By default, this scaffolding includes dust for templates, LESS for CSS preprocessing, RequireJS for JavaScript modules, and Grunt for task handling. This is our recommended setup, but using different technologies is supported as well.

Configuration

If you’ve used express before, you’ve probably written code to configure how your cookies are parsed, whether you have a favicon, how you’re accepting multipart forms, and so on. It’s extremely flexible, but that code adds complexity and, more importantly, if your applications are spread across teams they’re not guaranteed to be doing it the same way. Kraken solves this by moving that setup out of the code and into configuration files.

Configuration example

Application and middleware configuration is stored in JSON files, creating a consistent implementation and removing the need for tribal knowledge when configuring items, e.g. does bodyParser need to come before cookieParser or vice-versa?

These files are also environment-aware, so overriding values when you’re in development, debug, or test mode is easy. To override a value in config/app.json for development you would create config/app-development.json with the delta and then start your app using NODE_ENV=development node index.js.
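The keys below are made up purely to illustrate the mechanism; only the delta lives in the environment-specific file:

config/app.json

{
  "host": "localhost",
  "databaseUrl": "db.internal.example.com"
}

config/app-development.json

{
  "databaseUrl": "localhost"
}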

Globalization and localization

As an application grows in popularity, its developers inevitably need to support different regions. At PayPal we support 193 countries across the globe.

Applications created via generator-kraken have built-in support for externalized content when using dust templates. This content is stored in its own file using key/value pairs. Rather than JSON or another complex format, we opted for a simpler data structure that is easy to hand-edit when needed but still flexible enough to support what we required.

Each template has an implicit binding to a content file of the same path and will automatically resolve strings within them. In other words, if you have templates/account/user.dust then content will be merged from locales/DE/de/account/user.properties for German users. This removes the hassle of needing to manually wire up your content source.

Content example
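A content file of this shape (keys invented for illustration) gives the idea:

index.greeting=Welcome back
index.signOut=Log out
account.title=Your account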

Shortly, we’ll also release support for template specialization when using dust in kraken. Experiences often need to deviate based on locale, but also for A/B tests and device types. It’s subpar to have this logic cluttering your code, and specialization solves that.

Application security

Security is important to us and, while there are plenty of best practices available for web applications, most are typically not enabled by default. Kraken enables these for you and uses configuration to set up smart defaults. A few of the more useful ones are called out below, followed by a sketch of how they can appear in configuration:

  • CSRF – Cross-site request forgery protection is enabled by default. A token is added to the session, and if the user is going to perform any data-changing method, e.g. POST, PUT, DELETE, then the template must include the token value so it is sent back with the request. This protects against malicious websites changing data on your users’ behalf.
  • XFRAMES – Using an HTML frame element to frame another website and trick users into performing actions they did not intend is called clickjacking. X-Frame-Options headers protect against this by restricting who can frame the web application. By default this is set to SAMEORIGIN, which means only you can frame your website.
  • CSP – Content Security Policy enables you to tell the browser what types of resources are allowed and enabled for your web application.
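One possible shape for those defaults in JSON configuration is sketched below; the exact key names depend on the kraken version, so treat this as illustrative only:

{
  "middleware": {
    "appsec": {
      "csrf": true,
      "xframe": "SAMEORIGIN",
      "csp": false
    }
  }
}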

How open source has changed PayPal

Kraken was the first major release from PayPal into the open source world and has been hugely successful in changing the way we think about software. In a way, it helped pave the way for us to hire Danese Cooper as our first head of open source at PayPal! We have historically been a company that kept to itself, and thus a lot of code which might have been useful to the community was instead developed in a proprietary manner.

Kraken was built to be the opposite of this: it is publicly available. This allowed us to keep out what I consider PayPal’isms – the secret sauce specific to PayPal – and to give back to the node community, filling gaps for others’ benefit.

The node community itself has been very welcoming, and we’ve seen both great interest in our adoption of node and multiple external contributions to the kraken codebase. This has been inspiring and has definitely confirmed that we made the right choice in going open source.

Try it out

If you’re interested in trying out kraken, head on over to krakenjs.com where you can find instructions and sample code to get you on your way. You can find other open source offerings from PayPal at paypal.github.io.

If any of this sounds interesting come work for us!

Hello Newman: A REST Client for Scala


Hi everyone, I’m Aaron. I came to PayPal from the StackMob team recently and while I was at StackMob, I co-created the Newman project. Our original goal was to unify high quality HTTP clients on the JVM together into a single, idiomatic Scala interface. I believe we’ve accomplished that, and we’ve moved on to higher level features to make it a great choice to talk to a RESTful service.

I recently gave a Newman talk at the Scala Bay Meetup in Mountain View, CA. A big thanks to everyone who came. I really appreciated all the great questions and feedback!

For those who missed my talk, I’ll give a recap here, describe Newman in more detail, and talk about some future plans. You can also check out all the code samples and slides from the talk at https://github.com/arschles/newman-example.

Background & Motivation

At StackMob, we ran a service oriented architecture to power our products. To build out that architecture we ran a distributed system composed of many services inside our firewall.

Every service in the system used HTTP for transport and JSON for serialization to communicate over the wire. The challenge we faced was this: How do we easily and flexibly send and receive JSON data over HTTP in Scala? We had the same challenge for building servers and clients.

When we began investigating the existing pool of HTTP clients, we turned to the massive JVM community for high quality clients that could handle our needs. We found a lot of them! I’ll highlight two clients with which we gained significant experience.

Apache HttpClient

When we looked at the Apache foundation, we found the HttpClient project. As expected, we found HttpClient to be very high quality. We used this library for a lot of our initial work, but we ran into a usability problem – it took too much code to do a simple request. The code below shows the setup and execution logic for a GET request:

/**
 * set up a connection manager and client.
 * you'd normally only do this once in your module or project.
 */
val connManager: ClientConnectionManager = {
  val cm = new PoolingClientConnectionManager()
  cm.setDefaultMaxPerRoute(maxConnectionsPerRoute)
  cm.setMaxTotal(maxTotalConnections)
  cm
}
val httpClient: AbstractHttpClient = {
  val client = new DefaultHttpClient(connManager)
  val httpParams = client.getParams
  HttpConnectionParams.setConnectionTimeout(httpParams, connectionTimeout)
  HttpConnectionParams.setSoTimeout(httpParams, socketTimeout)
  client
}

/**
 * now make the actual GET request
 */
val req = new HttpGet
val url = new URL("http://paypal.com")
req.setURI(url.toURI)
val headers: List[(String, String)] = ???
headers.foreach { tup: (String, String) =>
  if (!tup._1.equalsIgnoreCase("Content-Type")) req.addHeader(tup._1, tup._2)
}
val body: Array[Byte] = Array('a'.toByte, 'b'.toByte, 'c'.toByte)
//oops, sending a request body with a GET request doesn't make sense
req.setEntity(new ByteArrayEntity(body)) 
val resp = httpClient.execute(req)

Twitter Finagle

Finagle is Twitter’s core library for building distributed systems. The company has built almost all of its distributed systems infrastructure on top of this library. Furthermore, it represents a major abstraction that one of its creators has called services. See this paper for more.

Finagle is built atop the Netty project, so we expected Finagle to handle high-concurrency workloads, which was important in many of our use cases. We had also used Netty directly to build some of our servers and found it stable, with a good community; with Finagle we found a similar pattern. For more on Finagle and Netty at Twitter, check out the recent Twitter blog posts.

Building HTTP clients with Finagle required less overall code than with the Apache library, but it is still somewhat involved. The following shows setup and execution code for the same GET request as above:

//Set up the client. It's bound to one host.
val host = "paypal.com:80" //ClientBuilder expects host:port pairs, not a full URL
val url = new URL("http://paypal.com/")
val client = ClientBuilder()
  .codec(Http())
  .hosts(host) //there are more params you can set here
  .build()

//Execute the request.
//Make sure the request is going to the same host
//as the client is bound to
val headers: Map[String, String] = ???
val method: Method = Method.Get //Finagle's own HTTP method type, not Apache's HttpGet
//this is an org.jboss.netty.buffer.ChannelBuffer
val channelBuf: ChannelBuffer = ??? 
val req = RequestBuilder()
  .url(url)
  .addHeaders(headers)
  //oops, sending a request body with a GET request doesn't make sense
  .build(method, Some(channelBuf))
val respFuture: Future[HttpResponse] = client.apply(req)

respFuture.ensure {
  client.close() //don't forget!
}

In Summary

In our search, we looked at other libraries as well, but found common patterns with all of them:

  1. HTTP libraries on the JVM tend to be very stable and well tested, or built atop very stable and well tested core libraries.
  2. You usually have to write setup and cleanup code.
  3. It usually takes at least 5 lines of code to execute a request.
  4. The plain Java libraries (obviously) require you to write non-idiomatic Scala.

Overall, the libraries we found required us to remember a lot of code, common patterns, and sometimes implementation details. With so much to remember, we decided to either commit to a single library or write a wrapper around each one we wanted to use.

In Comes Newman

Newman started as an informal wrapper around Apache HttpClient. As our overall codebase grew and evolved, we needed to use new clients and knew we needed to formalize our original wrapper into a stable interface to wrap all the messy details of each implementation.

We began with the core interface and two implementations: ApacheHttpClient and FinagleHttpClient. After we deployed code using our first Newman clients, we found more benefits to the core abstraction:

  1. Safety – We iterated on the interface and used Scala’s powerful type system to enforce various rules of HTTP and REST. We’re now at a point where our users can’t compile code that attempts to execute various types of invalid HTTP requests.
  2. Performance – Behind the interface, we added various levels of caching and experimented with connection pooling mechanisms, timeouts, and more to extract the best performance from Newman based on our workloads. We didn’t have to change any code on the other side of the abstraction.
  3. Concurrency – Regardless of the underlying implementation, executing a request returns standard Scala Futures that contain the response. This pattern helps ensure that code doesn’t block on downstream services. It also ensures we can interoperate with other Scala frameworks like Akka or Spray. The Scala community has a lot of great literature on Futures, so I’ll defer to those resources instead of repeating things. The Reactive Manifesto begins to explain some reasoning behind Futures (and more!) and the standard Scala documentation on Futures shows some usage patterns.
  4. Extensibility – Our environments and workloads change, so our clients must change too. To make that change, we just switch clients with one line of code. We also made the core client interface in Newman very easy to extend, so we can implement a new client quickly and have more time to focus on getting the performance right.

Higher Level Features

We had our basic architecture figured out and tested, and it looks like this:

[Slide: Newman’s client, request, and response architecture]

A few notes about this architecture:

  • HttpClient is heavy – it handles various caching tasks, complex concurrency tasks (running event loops and maintaining thread pools, for example), and talking to the network.
  • HttpClient creates HttpRequests – each HttpRequest is very small and light. It contains a pointer back to the client that created it, so it’s common to have many requests for one client.
  • HttpRequest creates Future[HttpResponse] – the Future[HttpResponse] is tied to the HttpClient that is executing the request. That Future will be completed when the response comes back into the client.

With this architecture, we had proven to ourselves in production that we had a consistent, safe and performant HTTP client library. Our ongoing task now is to build features that make building and running systems easier for everyone who uses Newman. Here are a few higher level features that Newman has now:

  • Caching – Newman has an extensible caching mechanism that plugs into its clients. You define your caching strategy (when to cache) and backend (how and where to store cached data) by implementing interfaces. You can then plug them into a caching HttpClient as necessary. With this extensible caching system it’s also possible to build cache hierarchies. So far we’ve built an ETag strategy, a simple read-through caching strategy, and an in-memory caching backend; all ship with Newman.
  • JSON – As I mentioned at the beginning of this post, we use JSON extensively as our data serialization format over the wire, so we built it into Newman as a first class feature. Newman enables full serialization and deserialization to/from any type. Since JSON operations are built into the request and response interfaces, all client implementations get JSON functionality “for free.”
  • DSL – We built a domain-specific language into Newman that makes even complex requests possible to create and execute in one line of code. The same goes for reading, deserializing, and decoding responses. The DSL is standard Scala and provides more type safety on top of core Newman, and Newman DSL code has become canonical.

The Result

Newman abstracts away the basics of RPC. For example, we were able to replace 10+ lines of code with the following (excluding imports and comments in both cases):

implicit val client = new ApacheHttpClient() //or swap out for another
GET(url(http, "paypal.com")).addHeaders("hello" -> "readers").apply

This code has more safety features than what it replaced in most cases and the setup and teardown complexities are written once and encapsulated inside the client. We have been pleased with Newman so far and anticipate that next steps will make Newman more powerful and useful for everyone.

The Future

We have a long list of plans for Newman. Our roadmap is open on GitHub at https://github.com/stackmob/newman/issues?state=open. The code is also open source, licensed under Apache 2.0. Read the code, file issues, request features, and submit pull requests at http://github.com/stackmob/newman.

Finally, if similar distributed systems work excites you, we build very large scale, high availability distributed systems and we’re hiring. If you’re interested, send me a GitHub, Twitter or LinkedIn message. Regardless, happy coding.

– Aaron Schlesinger – https://github.com/arschles · https://twitter.com/arschles · http://www.linkedin.com/profile/view?id=15144078


PayPal and OpenStack Summit 2013


The PayPal OpenStack Team has been hard at work driving the implementation of OpenStack within PayPal. We are now running nearly 20% of our production PayPal workloads through OpenStack, and we were at the 2013 Hong Kong OpenStack Summit to discuss our experiences with the build-out of the PayPal cloud.

Starting with this entry, we will begin to cover our OpenStack implementation by providing technical white papers, blueprints, comments, etc.  Our goal is to contribute our knowledge back to the community and work together with everyone as a corporate leader supporting OpenStack.

We did six sessions at the recently completed OpenStack Summit 2013 in Hong Kong:

Day 1:  Jonathan Pickard
PayPal’s Journey into the Cloud – Infrastructure Engineering Transformation

Day 1:  Scott Carlson
Marriage of OpenStack with KVM and ESX at PayPal

Day 1:  Yuriy Brodskiy
User Panel: How Did You Bring OpenStack Cloud to Your Company

Day 2:  Anand Palanisamy, Chinmay Naik
Lessons learned – Building the PayPal Cloud

Day 3:  Scott Carlson, Raj Geda, Zhitang Huang
HA OpenStack at PayPal   

Day 4:  Vinay Bannai
Neutron Hybrid Deployment and Performance Analysis

We love to share what we’ve done on OpenStack. As we move forward, we will continue to contribute projects like Aurora and code fixes back to the community through our open source channel at https://github.com/paypal.

Aurora is the first thing that we released back to the community. We took Netflix’s Asgard framework for AWS and ported it to OpenStack, making the changes necessary for an enterprise like PayPal. It allows you to manage your cloud across multiple data centers and should work with most Folsom and Grizzly installations.

Please reach out to me on Twitter @relaxed137 with questions or to start a deeper conversation with the PayPal cloud team.

Extending genio


I am a great fan of Khan Academy and use it to learn more about economics and astronomy – subjects I am far removed from professionally but find fascinating nevertheless. Khan Academy provides a simple API that is RESTful and uses OAuth 1.0 for authentication. For a while I had been meaning to code a simple dashboard to help me discover new videos and exercises. Documentation is available on their wiki, but I was in no mood to read through it – I wanted to hit the IDE right away. That’s where genio came in handy. Khan Academy does not provide a WADL, but I noticed that the Khan API explorer used a JSON specification to describe the API. So I decided to write a quick and dirty genio parser for the Khan API specification.

Setting up the project

I am a Java / PHP programmer with little Ruby experience, but getting started with the genio gem was a breeze. I have Ruby 1.9.3 installed on my machine. First, I created a new folder called khan-parser with this minimal Gemfile:

source 'https://rubygems.org'
gem 'genio'

I have a dependency on the genio gem since I plan to use one of the built-in templates to generate PHP code. I would have listed genio-parser as my dependency if I also had to write custom templates. I then ran bundle install to fetch dependencies. If you do not have bundler installed on your system, get it with gem install bundler.

Writing a custom parser

The next step is to write a parser that can understand the custom Khan API specification. genio-parser uses a model that is agnostic of the specification format and is fed into its templates, which allows multiple specification formats to be plugged in easily. The first release, in fact, comes with built-in parsers for WSDL, WADL and the Google discovery formats. At a high level, the API model defines:

  • DataTypes – Data types as defined by the API domain.
  • Properties – Sub elements under each DataType. A DataType can have optional / mandatory properties and generally has documentation for each property.
  • Operations – Represents a unique API operation and is associated with request and response parameters.
  • Services – A grouping of API operations. In the case of RESTful services, each Resource is represented by a DataType that is mapped to a Service. For SOAP services, each portType defined by the WSDL maps to a service.

The Khan parser that I set out to write essentially had to convert the JSON specification into an object tree of these model types. I created a khan-parser.rb file with a KhanParser class that extends Genio::Parser::Format::Base. The class need only implement a single method, load, which takes the location of the specification file to parse.

module MyApp
  class KhanParser < Genio::Parser::Format::Base

    # Load schema
    def load(filename)
    end

  end
end

RESTful APIs define operations that can be performed on resources and the API url patterns reflect this design. For example, if ‘User’ is a resource in the domain model, all user related operations are made available at, say, /api/v1/user/*. While the Khan API follows this design principle, the specification does not explicitly group operations under resources. I could have parsed the API urls to determine the resource-operation mapping but chose to keep things simple and generate all operations under a single surrogate resource.

In the load function, I parse the Khan API JSON into a Service object. Notice how I create objects of type Genio::Parser::Types::Service and Genio::Parser::Types::DataType and add them to the global object map.

def load(filename)

  # Read and parse the API specification
  data = JSON.parse(File.read(filename), :object_class => Genio::Parser::Types::Base,
                    :max_nesting => 100)
  klass = class_name(filename)

  # Create the meta model tree describing the API
  service = Genio::Parser::Types::Service.new({});
  service.operations = parse_operations(klass, data)

  # Add the definition to our global definition
  services[klass] = service;

  # Add a dummy resource (data type) with same name 
  # as our service since the Khan API does not 
  # explicitly group operations under resources
  data_types[klass] = 
            Genio::Parser::Types::DataType.new({"properties" => {}});

end

The next step is to parse the API operations in the parse_operations function, which simply iterates through the API specification and returns a hash of Operation objects keyed by operation name.

def parse_operations(service_name, data)
  operations = {};
  data.each do |options|
    operations[get_operation_name(options.http_method, 
                                  options.url)] = parse_operation(options)
  end
  return operations
end

Now, let’s look at how the Operation objects are created. The parse_operation function takes a chunk of the Khan API specification JSON that represents an operation as input. It returns a Genio::Parser::Types::Operation object with the following properties set

  • type – The HTTP method for the operation.
  • path – The relative URL path for accessing this resource.
  • description – Any human-readable description of the operation, used to generate method comments in the generated code.
  • response – The API response type for this operation. Khan Academy’s API simply returns a JSON payload whose type is not defined in the specification, so I have chosen to return the JSON response as-is as a string.
  • parameters – An array of query parameters and path parameters.

# Parse operation
def parse_operation(data)
  operation = {
    "type" => data.http_method,
    # genio expects path parameter placeholders to be wrapped between {},
    # e.g. /v1/resource/{resource-id}
    "path" => data.url.gsub("<path:", "{").gsub(">", "}"),
    "description" => data.summary + "\n" + data.description + "\n",
    "response" => "string"
  }

  operation["parameters"] = {};
  data.arguments.each do |options|
    options.type = "string"
    options.location = "path" if options.part_of_url == true
    operation["parameters"][options.name] = options;
  end

  Genio::Parser::Types::Operation.new(operation)
end

And with this, our parser is ready.

Generating files

To generate files using the new parser, I need to tell genio where the parser and the templates are. I created a new file called generator.rb in the khan-parser project. I used thor, a widely used command-line toolkit gem, so I can pass command-line arguments to the generator. Notice that I include Genio::Template. This allows me to use the built-in "templates/sdk.rest_php.erb" from genio and also provides some command-line helpers.

require './khan-parser.rb'
require 'genio'
require 'thor'

class MyTask < Thor
  include Genio::Template

  class_option :schema, :type => :string, :required => true,
               :desc => "Json schema path"
  class_option :output_path, :type => :string, :default => "output"
  class_option :package, :type => :string, :default => "MyApp", 
               :desc => "Namespace for generated class"

  desc "generate", "Geneate khan PHP SDK"
  def generate
    schema = MyApp::KhanParser.new()
    schema.load(options[:schema].strip)
    schema.data_types.each do|name, data_type|
      render("templates/sdk.rest_php.erb",
        :data_type => data_type,
        :package => options[:package],
        :classname => name,
        :schema => schema,
        :helper => Genio::Helper::PHP,
        :create_file => options[:output_path] + '/' + name + '.php');
    end
  end
end

MyTask.start(ARGV)

I ran ruby generator.rb generate --output-path=dashboard-app/ --schema=khan.json to test my generator, and all was good.

The dashboard app

Now, on to the dashboard application that will use the generated Khan.php class. Since I intend to use the generated wrapper class with the PayPal REST SDK, I created a new folder called khan-dashboard and added a composer.json file, required by Composer, a widely used dependency manager for PHP.

I added a dependency to PayPal’s REST SDK in composer.json and ran composer update --no-dev

{
  "name": "MyApp/khan-dashboard",
  "description": "Khan notifier app",
  "require": {
    "paypal/rest-api-sdk-php" : "*"
  }
}

With this skeletal project ready, I switched back to my ruby project and ran ruby generator.rb generate --schema=khan.json --output-path=../khan-dashboard to generate my service class, Khan.php. With the autogenerated Khan.php ready, now is the time to code my dashboard app. My rudimentary dashboard had this snippet

<?php
require __DIR__ . '/vendor/autoload.php';
require __DIR__ . '/Khan.php';

# Configure the API call
$apiContext = new PayPal\Rest\ApiContext(null);
$apiContext->setConfig(
  array(
  'service.EndPoint' => 'http://www.khanacademy.org'
  )
);

# Call the get_exercises_videos method to get a video listing
$res = MyApp\Khan::get_exercises_videos('logarithms_1',
                                        array(), $apiContext);

# Display results
foreach (json_decode($res) as $video) {
  // ....
}

With this, my dashboard app is ready. I just ran php app.php to fetch video results using the Khan Academy API. You can find the complete code sample for the Khan custom parser at https://github.com/paypal/genio-sample/tree/master/khan-academy.