Tag Archives: Java

Creating a PayPal / Credit Card Payment within an Android Application

In this tutorial we’re going to learn how to set up the PayPal Android SDK to process a simple payment via either a PayPal payment or a credit card purchase. At the end of this example, you should have a simple button in an application that, when clicked, will forward the user to PayPal to confirm a set payment, then return the user back to the application and log the confirmation of payment.

The complete application code for this example is available in the PayPal Developer Github Repository.

Let’s get started.

The first step is to obtain the SDK and add it to your project. We add the reference to our build.gradle dependencies like so:

dependencies {
    compile 'com.paypal.sdk:paypal-android-sdk:2.14.1'
    ...
}

Now we head over to our MainActivity.java file (or wherever you’d like to add the PayPal button integration), and add in a config object for our client ID and the environment (sandbox) that we will be using.

private static PayPalConfiguration config = new PayPalConfiguration()
    .environment(PayPalConfiguration.ENVIRONMENT_SANDBOX)
    .clientId("YOUR CLIENT ID");

Now we’re going to create a button in our onCreate(...) method, which will enable us to process a payment via PayPal once clicked.

@Override
protected void onCreate(Bundle savedInstanceState){
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    // The click handler for this button is wired up via android:onClick in the layout XML below.
    final Button button = (Button) findViewById(R.id.paypal_button);
}

We now need to define the functionality for that button. In your res > layout > main XML file you can add the following definition for the button, which will define the text and onClick handler for the button with the paypal_button ID.

<Button android:id="@+id/paypal_button"
    android:layout_height="wrap_content"
    android:layout_width="wrap_content"
    android:text="@string/paypal_button"
    android:onClick="beginPayment" />

When clicked, the button will call the beginPayment(...) method. We can then add the text for the button to our strings.xml file, like so:

<string name="paypal_button">Pay with PayPal</string>

With the button in place, we now have to handle the button click in order to begin payment processing. Add in the following beginPayment(...) method below our previous onCreate(...) method.

public void beginPayment(View view){
    Intent serviceConfig = new Intent(this, PayPalService.class);
    serviceConfig.putExtra(PayPalService.EXTRA_PAYPAL_CONFIGURATION, config);
    startService(serviceConfig);

    PayPalPayment payment = new PayPalPayment(new BigDecimal("5.65"), 
        "USD", "My Awesome Item", PayPalPayment.PAYMENT_INTENT_SALE);

    Intent paymentConfig = new Intent(this, PaymentActivity.class);
    paymentConfig.putExtra(PayPalService.EXTRA_PAYPAL_CONFIGURATION, config);
    paymentConfig.putExtra(PaymentActivity.EXTRA_PAYMENT, payment);
    startActivityForResult(paymentConfig, 0);
}

What we are doing here is first setting up the service intent (serviceConfig), using the config that we had defined previously for our client ID and the sandbox environment. We then specify the payment object that we want to process. For the sake of this example, we are setting a static price, currency, and description. In your final application, these values should be obtained from what the user is trying to buy in the application. Lastly, we set up the paymentConfig, adding in both the config and payment objects that we had previously defined, and start the activity.
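
For illustration, a dynamically built payment might come from whatever the user has selected; the CartItem class and its getters below are hypothetical application code, not part of the PayPal SDK:

// Hypothetical helper inside MainActivity: build the payment from an application-defined
// CartItem (an assumed class, not part of the PayPal SDK) instead of hard-coded values.
private PayPalPayment paymentFor(CartItem item) {
    return new PayPalPayment(
            item.getPrice(),          // e.g. new BigDecimal("5.65")
            item.getCurrencyCode(),   // e.g. "USD"
            item.getName(),           // e.g. "My Awesome Item"
            PayPalPayment.PAYMENT_INTENT_SALE);
}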

At this point the user will be presented with the PayPal login and payment screens, allowing them to select whether to pay with PayPal or a credit card (via manual entry or card.io if the camera is available). That screen will look something like this:

PayPal payment confirmation screen

Once done, we need to have a handler ready for when PayPal forwards the user back to the application after confirmation of payment or cancellation. Let’s override onActivityResult(...) for that purpose.

@Override
protected void onActivityResult (int requestCode, int resultCode, Intent data){
    if (resultCode == Activity.RESULT_OK){
        PaymentConfirmation confirm = data.getParcelableExtra(
            PaymentActivity.EXTRA_RESULT_CONFIRMATION);
        if (confirm != null){
            try {
                Log.i("sampleapp", confirm.toJSONObject().toString(4));

                // TODO: send 'confirm' to your server for verification

            } catch (JSONException e) {
                Log.e("sampleapp", "no confirmation data: ", e);
            }
        }
    } else if (resultCode == Activity.RESULT_CANCELED) {
        Log.i("sampleapp", "The user canceled.");
    } else if (resultCode == PaymentActivity.RESULT_EXTRAS_INVALID) {
        Log.i("sampleapp", "Invalid payment / config set");
    }
}

Within the onActivityResult(...) method, we are checking to see if the resultCode that comes back is RESULT_OK (user confirmed payment), RESULT_CANCELED (user cancelled payment), or RESULT_EXTRAS_INVALID (there was a configuration issue). In the case of a valid confirmation, we get the object that is returned from the payment and, in this sample, log it. What will be returned to us should look something like the following:

{
    "client": {
        "environment": "sandbox",
        "paypal_sdk_version": "2.14.1",
        "platform": "Android",
        "product_name": "PayPal-Android-SDK"
    },
    "response": {
        "create_time": "2016-05-02T15:33:43Z",
        "id": "PAY-0PG63447RB821630KK1TXGTY",
        "intent": "sale",
        "state": "approved"
    },
    "response_type": "payment"
}

If we look under the response object, we can see that we have a state of approved, meaning that the payment was confirmed. At this point, that object should be sent to your server to confirm that a payment actually went through. For more information on those steps, see these docs.
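
As a minimal sketch of that hand-off (the /verify-payment endpoint is a placeholder, and the request must run off the main thread, e.g. on an executor):

// Minimal sketch: POST the proof of payment to your own server, which then verifies it
// against PayPal's APIs. The URL is a placeholder; run this on a background thread.
private void sendConfirmationToServer(PaymentConfirmation confirm) {
    try {
        byte[] body = confirm.toJSONObject().toString().getBytes("UTF-8");
        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://example.com/verify-payment").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        conn.getOutputStream().write(body);
        Log.i("sampleapp", "verification response: " + conn.getResponseCode());
        conn.disconnect();
    } catch (Exception e) {
        Log.e("sampleapp", "could not send confirmation", e);
    }
}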

Our last step is to cleanup in our onDestroy(...).

@Override
public void onDestroy(){
    stopService(new Intent(this, PayPalService.class));
    super.onDestroy();
}

That’s all there is to it. In this example we’ve created a simple button to process a payment with either PayPal or a credit card. From this point, there are a few next steps for you to expand upon this sample:

  • Pulling in payment information dynamically based on user product selection in the beginPayment(...) method.
  • Sending the payment confirmation to your server and verifying that the payment actually went through.
  • Handling the error and cancellation user cases within the app.

Powering Transactions Search with Elastic – Learnings from the Field

Introduction

We see a lot of transactions at PayPal. Millions every day.

These transactions originate externally (a customer using PayPal to pay for a purchase on a website) as well as internally, as money moves through our system. Regular reports of these transactions are delivered to merchants in the form of a CSV or PDF file. Merchants use these reports to reconcile their books.

Recently, we set out to build a REST API that could return transaction data back to merchants. We also wanted to offer the capability to filter on different criteria such as name, email or transaction amount. Support for light aggregation/insight use cases was a stretch goal. This API would be used by our partners, merchants and external developers.

We chose Elastic as the data store. Elastic has proven, over the past 6 years, to be an actively developed product that constantly evolves to adapt to user needs. With a great set of core improvements introduced in version 2.x (memory-mapped doc values, auto-regulated merge throttling), we didn’t need to look any further.

Discussed below is our journey and key learnings we had along the way, in setting up and using Elastic for this project.

Will It Tip Over

Once live, the system would have tens of terabytes of data spanning 40+ billion documents. Each document would have over a hundred attributes. There would be tens of millions of documents added every day. Each of the planned Elastic blades has 20 TB of SSD storage, 256 GB of RAM, and 48 (hyper-threaded) cores.

While we knew Elastic had all the great features we were looking for, we were not too sure if it would be able to scale to work with this volume (and velocity) of data. There are a host of non-functional requirements that arise naturally in financial systems which have to be met. Let’s limit our discussion in this post to performance – response time to be specific.

Importance of Schema

Getting the fundamentals right is the key.

When we initially set up Elastic, we turned on strict validation of fields in the documents. While this gave us a feeling of security akin to what we’re used to with relational systems (strict field and type checks), it hurt performance.

We inspected the content of the Lucene index Elastic created using Luke. With our initial index setup, Elastic was creating sub-optimal indexes. For example, in places where we had defined nested arrays (marked index=”no”), Elastic was still creating hidden child documents in Lucene, one per element in the array. This document explains why, but those hidden documents were still using up space even though we couldn’t query the property via the index. Because of this, we switched the “dynamic” index setting from strict to false. Avro helped ensure that each document conformed to a valid schema when we prepared the documents for ingestion.
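
For illustration, the relevant part of an index creation request looked roughly like this; the index, type, and field names are placeholders, not our real schema:

PUT /transactions-2016-w18
{
  "mappings": {
    "transaction": {
      "dynamic": false,
      "properties": {
        "transaction_id": { "type": "string", "index": "not_analyzed" },
        "amount":         { "type": "double" },
        "line_items":     { "type": "object", "enabled": false }
      }
    }
  }
}

With "dynamic": false, unmapped fields arriving in a document are kept in _source but not indexed, and schema conformance is enforced upstream (Avro, in our case) rather than by Elastic.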

A shard should have no more than 2 billion parent plus nested child documents, if you plan to run force merge on it eventually (Lucene doc_id is an integer). This can seem high but is surprisingly easy to exceed, especially when de-normalizing high cardinality data into the source. An incorrectly configured index could have a large number of hidden Lucene documents being created under the covers.

Too Many Things to Measure

With the index schema in place, we needed a test harness to measure the performance of the cluster. We wanted to measure Elastic performance under different load conditions, configurations, and query patterns. Taken together, the dimensions total more than 150 test scenarios. Sampling each by hand would be nearly impossible. jMeter and Beanshell scripting really helped here, letting us auto-generate scenarios from code and have jMeter sample each one hundreds of times. The results were then fed into Tableau to help make sense of the benchmark runs.

  • Indexing Strategy
    • 1 month data per shard, 1 week data per shard, 1 day data per shard
  • # of shards to use
  • Benchmark different forms of the query (constant score, filter with match all etc.)
  • User’s Transaction Volume
    • Test with accounts having 10 / 100 / 1000 / 10000 / 1000000 transactions per day
  • Query time range
    • 1 / 7 / 15 / 30 days
  • Store source documents in Elastic? Or store source in a different database and fetch only the matching IDs from Elastic?

Establishing a Baseline

The next step was to establish a baseline. We chose to start with a single node with one shard. We loaded a month’s worth of data (2 TB).

Tests showed we could search and get back 500 records from across 15 days in about 10 seconds when using just one node. This was good news since it could only get better from here. It also showed that an Elastic (Lucene) shard can handle 2 billion documents indexed into it, more than what we’ll end up using per shard.

One takeaway was that a high segment count increases response time significantly. This might not be so obvious when querying across multiple shards, but it is very obvious when there’s only one. Use force merge if you have the option (offline index builds). Using a high enough value for refresh_interval and translog.flush_threshold_size enables Elastic to collect enough data into a segment before a commit. The flip side was that it increased the latency for data to become available in search results (we used 4 GB and 180 seconds for this use case). We could clock over 70,000 writes per second from just one client node.
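
Concretely, the write-side settings above (180-second refresh, 4 GB translog flush threshold) and an offline force merge can be expressed roughly as follows; the index name is a placeholder, and the _forcemerge endpoint assumes Elastic 2.1+:

PUT /transactions-2016-w18/_settings
{
  "index.refresh_interval": "180s",
  "index.translog.flush_threshold_size": "4gb"
}

POST /transactions-2016-w18/_forcemerge?max_num_segments=20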

Nevertheless, data from the recent past is usually hot and we want all the nodes to chip in when servicing those requests. So next, we shard.

Sharding the Data

The same one month of data (2 TB) was loaded onto 5 nodes with no replicas. Each Elastic node had one shard on it. We chose 5 nodes so that a few nodes remained unused; they would come in handy in case the cluster started to falter and needed additional capacity, and for testing recovery scenarios. Meanwhile, the free nodes were used to load data into Elastic and acted as jMeter slave nodes.

With 5 nodes in play,

  • Response time dropped to 6 seconds (40% gain) for a query that scanned 15 days
  • Filtered queries were the most consistent performers
  • As a query scanned more records due to an increase in the date range, the response time also grew with it linearly
  • A force merge to 20 segments resulted in a response time of 2.5 seconds. This showed a good amount of time was being spent in processing results from individual segments, which numbered over 300 in this case. While tuning the segment merge process is largely automatic starting with Elastic 2.0, we can influence the segment count. This is done using the translog settings discussed before. Also, remember we can’t run a force merge on a live index taking reads or writes, since it can saturate available disk IO
  • Be sure to set the “throttle.max_bytes_per_sec” param to 100 MB or more if you’re using SSDs; the default is too low
  • Having the source documents stored in Elastic did not affect the response time by much, maybe 20ms. It’s surely more performant than having them off cluster on say Couchbase or Oracle. This is due to Lucene storing the source in a separate data structure that’s optimized for Elastic’s scatter gather query format and is memory mapped (see fdx and fdt files section under Lucene’s documentation). Having SSDs helped, of course

 


Final Setup

The final configuration we used had 5-9 shards per index depending on the age of the data. Each index held a week’s worth of data. This got us a reasonable shard count across the cluster but is something we will continue to monitor and tweak, as the system grows.

We saw response times around the 200 ms mark to retrieve 500 records after scanning 15 days’ worth of data with this setup. The cluster had 6 months of data loaded into it at this point.

Shard counts impact not just read times; they impact JVM heap usage due to Lucene segment metadata as well as recovery time, in case of node failure or a network partition. We’ve found it helps Elastic during rebalance if there are a number of smaller shards rather than a few large ones.

We also plan to spin up multiple instances per node to better utilize the hardware. Don’t forget to look at your kernel IO scheduler (see hardware recommendations) and the impact of NUMA and zone reclaim mode on Linux.

Conclusion

Elastic is a feature rich platform to build search and data intensive solutions. It removes the need to design for specific use cases, the way some NoSQL databases require. That’s a big win for us as it enables teams to iterate on solutions faster than would’ve been possible otherwise.

As long as we exercise due care when setting up the system and validate our assumptions on index design, Elastic proves to be a reliable workhorse to build data platforms on.

squbs: packaging and deployment instructions to run on AWS nodes

Overview

This page describes a quick way to package, deploy, and start a squbs application. This guide uses Amazon EC2 as an example, showing how to run a squbs application in a few minutes.

You can leverage either the scala activator template or the java activator template to begin development.

Packaging

You need to install the following on your build instance

Steps to build:

  • Clone the source code from the git repo to the <project> directory
  • cd <project>
  • Run the sbt build command, including “packArchive”, such as: sbt clean update test packArchive
  • There are two archives created under <project>/target:
    • <app>-<version>.tar.gz
    • <app>-<version>.zip

Start

You need to install the following on your running instance

Steps to run:

  • Copy either of the archives to the running instance:
    • <app>-<version>.tar.gz
    • <app>-<version>.zip
  • Explode the archive, for example tar zxvf <app>-<version>.tar.gz, which extracts into the <app>-<version> directory
  • Start the application: <app>-<version>/bin/run &
  • You can check the admin console at http://localhost:8080/adm from that instance, or at http://<host>:8080/adm remotely

Shutdown

You can terminate the running process, for example on Linux: kill $(lsof -ti TCP:8080 | head -1). Since the application registers a shutdown hook with the JVM, it will shut down gracefully unless it is terminated abruptly.

Amazon EC2

Log into AWS EC2 and launch an instance

  • You can create a free-tier instance, if its capacity meets your needs
  • In the security group, open (inbound) SSH on port 22 and a Custom TCP Rule for port 8080
  • SSH into the server (see AWS Console -> Instances -> Actions -> Connect)
  • Execute the Start and Shutdown steps described above

Visit us at https://github.com/paypal/squbs

Lessons Learned from the Java Deserialization Bug

(with input from security researcher Mark Litchfield)

Introduction

At PayPal, the Secure Product LifeCycle (SPLC) is the assurance process that reduces and eliminates security vulnerabilities in our products over time by building repeatable, sustainable, proactive security practices and embedding them within our product development process.

A key tenet of the SPLC is incorporating the lessons learned from remediating security vulnerabilities back into our processes, tools, and training to keep us on a continuous improvement cycle.

The story behind the Java deserialization vulnerability

The security community has known about deserialization vulnerabilities for a few years but they were considered to be theoretical and hard to exploit. Chris Frohoff (@frohoff) and Gabriel Lawrence (@gebl) shattered that notion back in January of 2015 with their talk at AppSecCali – they also released their payload generator (ysoserial) at the same time.

Unfortunately, their talk did not get enough attention among mainstream technical media. Many called it “the most underrated, under-hyped vulnerability of 2015”. It didn’t stay that way after November 2015, when security researchers at FoxGlove Security published their exploits for many products including WebSphere, JBOSS, WebLogic, etc.

This caught our attention, and we forked off a few work streams to assess the impact on our application infrastructure.

Overview of the Java Deserialization bug

A custom deserialization method in Apache commons-collections contains reflection logic that can be exploited to execute arbitrary code. Any Java application that deserializes untrusted data (deserialization is also known as unmarshalling or un-pickling) and has commons-collections in its classpath can be exploited to run arbitrary code.
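
To make the vulnerable pattern concrete, here is a minimal sketch; the InputStream stands in for any untrusted source (an HTTP request body, a socket, a message queue):

import java.io.InputStream;
import java.io.ObjectInputStream;

// The risky pattern: handing attacker-controlled bytes straight to Java serialization.
// If a gadget chain (such as the one in vulnerable commons-collections versions) is on
// the classpath, this readObject() call can be made to execute arbitrary code.
static Object readUntrusted(InputStream untrusted) throws Exception {
    ObjectInputStream in = new ObjectInputStream(untrusted);
    return in.readObject();   // code execution can trigger here, before any cast or validation
}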

The obvious quick fix was to patch commons-collections jar so that it does not contain the exploitable code. A quick search on our internal code repository showed us how complex this process could be – many different libraries and applications use many different versions of commons-collections and the transitive nature of how this gets invoked made it even more painful.

Also, we quickly realized that this is not just about commons-collections or even Java; it is an entire class of vulnerability in itself and could affect any of our other apps that un-pickle arbitrary object types. We wanted to be surgical about this, as patching hundreds of apps and services was truly a gargantuan effort.

Additionally, most financial institutions have a holiday moratorium that essentially prevents any change or release to the production environment, and PayPal is no exception. We conducted our initial assessment on our core Java frameworks and found that we were not utilizing the vulnerable libraries and therefore had no immediate risk from exploit tools in the wild. We still had to review other applications that were not on our core Java frameworks in addition to our adjacencies.

The Bug Bounty Submission

Mark Litchfield, one of the top security researchers in our Bug Bounty program, submitted a Remote Code Execution (RCE) finding using the above-mentioned exploit generator on December 11, 2015. Mark has been an active member of the PayPal Bug Bounty community since 2013. He has submitted 17 valid bugs and has been featured on our Wall of Fame.

As it turned out, this bug was found in one of our apps that was not on our core Java frameworks. Once we validated the bug, the product development and security teams quickly moved to patch the application in a couple of days despite the holiday moratorium.

Below are Mark’s notes about this submission in his own words:

  • It was a Java deserialization vulnerability
  • Deserialization bugs were (as of a month ago) receiving a large amount of attention, as the class of vulnerability had long been overlooked despite being as bad as it is
  • There were nine different attack vectors within the app

While there were nine different attack vectors, the underlying vulnerability was in just one place, and fixing that one spot took care of all of them.

Remediation – At Scale

The next few days were a whirlwind of activity across many PayPal security teams (AppSec, Incident Response, Forensics, Vulnerability Management, Bug Bounty and other teams) and our product development and infrastructure teams.

Below is a summary of the key learnings that we think would be beneficial to other security practitioners.

When you have multiple application stacks, core and non-core applications, COTS products, subsidiaries, etc. that you think are potentially vulnerable, where do you start?

Let real-world risk drive your priorities. You need to make sure that the critical risk assets get the first attention. Here are the different “phases” of activities that would help bring some structure when driving the remediation of such a large-scale and complex vulnerability.

Inventory

  • If your organization has a good app inventory, it makes your life easy.
  • If you don’t, then start looking for products that have known exploit code like WebSphere, WebLogic, Jenkins, etc.
  • Validate the manual inventory with automated scans and make sure you have a solid list of servers to be patched.
  • For custom code, use static and dynamic analysis tools to determine the exposure.
  • If you have a centralized code repo, it definitely makes things easy.
  • Make sure you don’t miss one-off apps that are not on your core Java frameworks.
  • Additionally, reach out to the security and product contacts within all of your subsidiaries (if they are not fully integrated) and ask them to repeat the above steps.

Toolkit

Here are a few tools that are good to have in your arsenal:

  • SerializeKiller
  • Burp Extension from DirectDefense
  • Commercial tools that you may already have in-house such as static/dynamic analysis tools and scanners. If your vendors don’t have a rule-pack for scanning, push them to implement one quickly.

Additionally, don’t hesitate to get your hands dirty and write your own custom tools that specifically help you navigate your unique infrastructure.

Monitoring & Forensics

  • Implement monitoring until a full-fledged patch can be applied.
  • Signatures for Java deserialization can be created by analyzing the indicators of the vulnerability.
  • In this case, a rudimentary indicator is to parse network traffic for the hexadecimal string “AC ED 00 05” or, in base64-encoded form, the string “rO0” (see the sketch after this list).
  • Whether you use open source or commercial IDS, you should be able to implement a rule to catch this. Refer to the free Snort rules released by Emerging Threats.
  • Given that this vulnerability has been in the wild for a while, do a thorough forensics analysis on all the systems that were potentially exposed.
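
As a simple illustration of the serialization indicator mentioned above (a sketch only; production detection belongs in your IDS or proxy layer, not ad-hoc code), the stream header can be checked like this:

import java.util.Arrays;

// Java serialization streams begin with the magic bytes AC ED 00 05,
// which show up as "rO0..." once the payload is base64-encoded.
static boolean looksLikeJavaSerialization(byte[] payload) {
    byte[] magic = {(byte) 0xAC, (byte) 0xED, 0x00, 0x05};
    return payload.length >= magic.length
            && Arrays.equals(Arrays.copyOfRange(payload, 0, magic.length), magic);
}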

Short-term Remediation

  • First up, drive the patching of your high-risk Internet facing systems – specifically COTS products that have published exploits.
  • For custom apps that use Apache commons-collections, you can either remove the vulnerable classes if they are not used or upgrade to the latest version.
  • If the owners of those systems are different, a lot can be accomplished in parallel.

Long-term Remediation

  • Again, this vulnerability is not limited to commons-collections or Java per se. It is an entire class of vulnerability, like XSS, and your approach should be holistic accordingly.
  • Your best bet is to completely turn off object serialization everywhere.
  • If that is not feasible, this post from Terse Systems has good guidance on dealing with custom code; a minimal whitelisting sketch follows this list.
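
One widely used approach for custom code, sketched below, is a look-ahead ObjectInputStream that rejects any class not on an explicit allow list; the class names in the allow list are placeholders:

import java.io.IOException;
import java.io.InputStream;
import java.io.InvalidClassException;
import java.io.ObjectInputStream;
import java.io.ObjectStreamClass;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// A look-ahead ObjectInputStream: each class is checked before it is deserialized.
public class WhitelistObjectInputStream extends ObjectInputStream {
    // Placeholder allow list -- replace with the exact types your protocol expects.
    private static final Set<String> ALLOWED =
            new HashSet<>(Arrays.asList("com.example.Invoice", "java.lang.String"));

    public WhitelistObjectInputStream(InputStream in) throws IOException {
        super(in);
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc)
            throws IOException, ClassNotFoundException {
        if (!ALLOWED.contains(desc.getName())) {
            throw new InvalidClassException(desc.getName(), "not on the deserialization allow list");
        }
        return super.resolveClass(desc);
    }
}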

Summary

In closing, we understand that today’s application infrastructure is complex and you don’t own or control all the code that runs in your environment. This specific deserialization vulnerability is much larger than any of us initially anticipated – spanning open source components, third-party commercial tools, and our own custom code. Other organizations should treat it seriously, as it can result in remote code execution, and should implement security controls for long-term remediation rather than stopping at patching the commons-collections library. To summarize:

  • We turned around quickly to patch known vulnerabilities and protect our customers
  • We are investing in tools and technologies that help inventory our applications, libraries and their dependencies in a much faster and smarter way
  • We are streamlining our application upgrade process to be even more agile in responding to such large-scale threats
  • We changed our policy on security bug fixes during the moratorium – we now mandate fixing not just P0 and P1 bugs, but also P2 bugs (e.g., bugs in internal apps that are not exposed to the Internet).

 

Inside the PayPal Mobile SDK 2.0

We’re pleased to announce the 2.0 release of the PayPal Mobile SDK. This major version update introduces important new capabilities as well as improvements under the hood. In this post we’ll highlight a few of the major changes, with some narrative about how we got here.

TL;DR? Check out the PayPal Developer Docs for an overview. In-depth docs and SDK downloads can be found in the PayPal iOS SDK on GitHub or PayPal Android SDK on GitHub.

SDK Focus

In writing the first version of the Mobile SDK last year, our team focused on four high-level values:

  1. Fast and delightful user experience making payments
  2. Simple, equally delightful developer experience taking payments
  3. Consistent, bulletproof quality across devices and platforms
  4. Open, direct technical communication between developers (you!) and us

To keep the bar high, we limited the first SDK release to single payments only, using PayPal or a credit card (with card.io scanning). While developers looking for a basic drop-in payment library have found the first version helpful, we knew a complete native mobile payments solution needed to offer more. So, without losing focus on simplicity, quality, and community, we’ve written the 2.0 Mobile SDK to enable seamless payments in modern, sophisticated commerce and marketplace apps.

Future Payments

The marquee feature in 2.0 is Future Payments. With Future Payments, developers can gather payment information from a customer just once, and use it to process all future payments from that customer. If you’ve added PayPal as a payment option in Uber, then you’ve used it. Uber piloted this feature with us to provide their customers with a PayPal payment option that’s fast and simple. With 2.0, we’re opening up Future Payments to all mobile developers.

So how does it work? To identify who should get paid, you give the SDK a public OAuth2 client_id as input. The SDK then authenticates a user and asks for payment authorization; it returns a short-lived OAuth2 authorization code. Your app sends that authorization code to your servers. Your servers use this code to request OAuth2 access and refresh tokens with the future_payments scope. You can use the access token right away to charge the user, and store the refresh token for the future. Any time you need to make payment API requests for this user in the future, you can exchange that refresh token for an access token, and you’re off to the races — no need to bother the user again.
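
As a rough server-side sketch of that refresh step (the endpoint path and form fields follow the standard OAuth2 refresh flow used by PayPal’s REST APIs; confirm both against the current PayPal OAuth2 docs before relying on them):

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Exchange a stored refresh token for a fresh access token (server side, not in the app).
static int refreshAccessToken(String clientId, String clientSecret, String refreshToken)
        throws IOException {
    String tokenUrl = "https://api.sandbox.paypal.com/v1/oauth2/token"; // assumed sandbox endpoint
    String basic = Base64.getEncoder()
            .encodeToString((clientId + ":" + clientSecret).getBytes(StandardCharsets.UTF_8));

    HttpURLConnection conn = (HttpURLConnection) new URL(tokenUrl).openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Authorization", "Basic " + basic);
    conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
    conn.setDoOutput(true);
    conn.getOutputStream().write(
            ("grant_type=refresh_token&refresh_token=" + refreshToken)
                    .getBytes(StandardCharsets.UTF_8));

    // A 200 response carries a JSON body with the new access_token.
    return conn.getResponseCode();
}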

Phew! It sounds more complicated than it actually is, and the SDK includes step-by-step integration guides.

Auth & Capture

One frequently requested feature is the option to decouple authorization of a payment from capture at a later time. This is a standard capability offered by credit card gateways. It is useful, for example, when shipping physical goods – you can authorize your customer’s payment when the order is placed, then later capture the funds only when the goods actually ship.

The 2.0 SDK uses new, flexible PayPal REST payment APIs – you can create a payment with either a sale or authorization intent. Thanks to the improved APIs, our team was able to add Auth/Capture support with only minor code changes. And, notably, the intent setting works identically for PayPal and credit card payments.
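
On Android, for example, switching from an immediate sale to auth/capture is just a different intent constant on the same payment object (a sketch, assuming the 2.x SDK’s PAYMENT_INTENT_AUTHORIZE constant):

// Authorize now; capture later from your server when the goods actually ship.
PayPalPayment payment = new PayPalPayment(new BigDecimal("5.65"), "USD",
        "My Awesome Item", PayPalPayment.PAYMENT_INTENT_AUTHORIZE);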

REST and OAuth 2.0

As PayPal continues to optimize mobile-first platform offerings, the PayPal Mobile SDK team works closely with counterparts in the API teams to set the roadmap and nail the integration with the OAuth2 Identity services and REST Payments API.

The 2.0 SDK now uses only the new PayPal APIs. Migrating to these APIs was critical to making the 2.0 SDK a reality. Now that we’re on the new stack, the SDK is well positioned to gain new capabilities in pace with the platform. We look forward to adding further improvements!

Check it Out

Learn more about the 2.0 PayPal Mobile SDKs by checking out the PayPal Developer Docs. When you’re ready to take a look at the APIs, sample code, README, and integration docs, head on over to GitHub.

We’d love to hear your integration experience and feedback.

About the PayPal SDK Team

We’re a distributed team (Portland, Austin, San Jose) that creates great tools for developers and delightful experiences for their users. When not releasing sweet PayPal SDKs, we can be found variously drinking copious amounts of coffee, writing science fiction, biking long distances, snuggling with a Ziggy, or driving down the California coast. Want to get in touch or see what we’ve worked on? Find us on GitHub: Dave, Brent, Tom, Matt, Jeff, Juwon, Avi, Josh, Abhijeet, Mike, Avijit, Roman, and Bunty.