A few weeks ago, we published a post detailing the internal graph domain-specific language (DSL) that we built to query our transaction database. Since then, we have been performing load and performance testing within our production environment. Specifically, we tested the performance of our OLTP persistence model and the latency of our OLAP entity extraction jobs, which use our DSL. In this post, we detail the timing and volume statistics around the load process, entity extraction traversals, and model persistence. Additionally, this post contains an interactive example of the data we persisted and outlines our testing process.
The PokitDok APIs persist the X12 transactions into our Titan graph; an interactive sample of X12 transaction data is below. For the performance tests described in this post, we persisted varying amounts of different X12 transaction types into the graph database. The graph below shows three different graph structures for the transactions used in these tests: 270, 271, and 837 transactions. Open up the image in a new tab to explore the data further.
We designed this partition of our graph to function as an OLTP database to handle the heavy write use case of our healthcare APIs. From these transactions, we use our DSL to traverse, extract, and persist entity models for OLAP processing. Given this pipeline, we created a series of tests to track the load statistics for persistence and entity extraction from the API call through DSL usage.
Our API infrastructure for real time entity extraction, model persistence, and matching is outlined in Figure 1. We persist an X12 API call via our dynamic JSONLoader (open source coming soon) in a graph database. We also kick off a job to use our graph DSL to extract and persist the entity models observed in this transaction. Lastly, we relate the entity model to the transaction in which it was observed. As we merge duplicate entities, this edge relates the merged representation of an entity to all transactions in which it was observed.
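The three pipeline steps above can be sketched as a minimal in-memory model. The names here (`persist_transaction`, `extract_and_link_entity`, the `observed_in` edge label) are hypothetical illustrations, not the PokitDok API; the point is the merge behavior, where a duplicate entity reuses its existing vertex and gains one observation edge per transaction.

```python
# Toy in-memory graph: vertices keyed by id, edges as (src, label, dst) tuples.
graph = {"vertices": {}, "edges": []}

def persist_transaction(tx_id, payload):
    """Step 1: persist the incoming X12 transaction as a vertex."""
    graph["vertices"][tx_id] = {"type": "transaction", **payload}

def extract_and_link_entity(entity_key, tx_id):
    """Steps 2-3: extract an entity model and relate it to its transaction.
    Duplicate entities are merged: reuse the vertex if the key already exists."""
    if entity_key not in graph["vertices"]:
        graph["vertices"][entity_key] = {"type": "entity"}
    graph["edges"].append((entity_key, "observed_in", tx_id))

persist_transaction("tx-1", {"transaction_type": "270"})
persist_transaction("tx-2", {"transaction_type": "271"})
extract_and_link_entity("payer:acme", "tx-1")
extract_and_link_entity("payer:acme", "tx-2")  # same entity, second transaction

# One merged entity vertex relates to both transactions it was observed in.
entity_count = sum(1 for v in graph["vertices"].values() if v["type"] == "entity")
assert entity_count == 1
assert len(graph["edges"]) == 2
```

In the real pipeline these operations are Gremlin traversals against Titan, but the merge-then-link pattern is the same.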
To gather load statistics about this model, we spun up a pipeline to hit our graph with varying scenarios of API loads while also monitoring the entity extraction and persistence jobs. Each individual run persists ratios of simulated eligibility requests, eligibility responses, and healthcare claims. An individual X12 transaction contains up to 177 vertices (check out our previous post on the structure of X12 JSON). Because the incoming JSON payload is hierarchical, each transaction is persisted as a tree; a transaction with 177 vertices therefore contains 176 edges. A fundamental theorem in graph theory is that if a tree has a total of |V| vertices, then the number of edges it contains, |E|, is |V| - 1. Check out a version of the proof here.
Across this simulation, Table 1 lists the distribution of persisted vertex count and edge count with the number of simulated API calls, which varies in each run. By applying the above theorem, we can use the total number of persisted vertices in each load test to calculate the number of edges in the graph. Since every persisted transaction is a disconnected sub-tree, each transaction contributes one fewer edge than it has vertices. As such, the final number of edges after persisting T total API calls is |V| - T.
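The forest edge-count identity above is easy to verify in a few lines of Python. This is a sketch with toy numbers, not our load-test code; it just checks that T disjoint trees with |V| total vertices contain |V| - T edges.

```python
def tree_edge_count(num_vertices):
    """A tree with n vertices always has n - 1 edges."""
    return num_vertices - 1

def forest_edge_count(vertex_counts):
    """Disjoint trees: total edges = sum of (n_i - 1) = |V| - T."""
    return sum(tree_edge_count(n) for n in vertex_counts)

# Simulate persisting T = 3 transactions of 177 vertices each.
transactions = [177, 177, 177]
total_vertices = sum(transactions)            # |V| = 531
total_edges = forest_edge_count(transactions) # 3 * 176 = 528
assert total_edges == total_vertices - len(transactions)  # |V| - T
```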
| Total API Calls | Total Vertices | Total Edges | Persistence Time (seconds) |
| --- | --- | --- | --- |
While persisting the API calls into the graph database, we also extract the entity information using the DSL traversals over the X12 transaction graph. Table 2 outlines the number of entities traversed, extracted, and persisted by this subroutine. Further, we calculated the wall clock time across the total number of trials, starting from the API call through the round-trip persistence of the entity to the Cassandra backend and the resulting graph query which verified entity extraction completion.
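A round-trip wall clock measurement of this kind can be sketched as follows. The `persist` and `verify` callables are hypothetical stand-ins for the Cassandra write and the verifying graph query; `time.perf_counter` is the standard-library clock suited to interval timing.

```python
import time

def timed_round_trip(persist, verify):
    """Measure wall clock time from persistence through verification."""
    start = time.perf_counter()
    persist()   # e.g. write the extracted entity to the backend
    verify()    # e.g. query the graph to confirm the entity exists
    return time.perf_counter() - start

# Stand-in no-op steps; real runs would plug in the actual calls.
elapsed = timed_round_trip(lambda: None, lambda: None)
assert elapsed >= 0.0
```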
| Total API Calls | Total Payor Entities Extracted | Total Consumer Entities Extracted | Total Provider Entities Extracted | Extraction Time via the DSL (seconds) |
| --- | --- | --- | --- | --- |
We ran the jobs on standalone machines: one running Ubuntu 14.04 with an Intel Core i5-5300U CPU @ 2.3GHz and 16GB RAM, and a MacBook Pro running OS X 10.10.5 with a 2.5GHz Intel Core i7 and 16GB of 1600MHz DDR3 RAM. We did not use a cluster in these runs.
Our JSONLoader and DSL entity extraction jobs perform well under the heavy write load of our APIs. In a future post, we will release the results of our load tests on our EC2 cluster in AWS and examine the p99 latency. In the meantime, if you would like to inspect the setup of our current environment, you can grab our Docker container, which comes preloaded with our Gremlin-Python library and graph infrastructure. Lastly, we are also extracting the PokitDok-specific components of the JSONLoader in preparation for an open-source release.
We are currently converting our DSL from TinkerPop 2.5.0 to TinkerPop 3.2. Specifically, we are following the development and conversation around the DSL on this ticket. Once the migration is complete, we will adopt the GraphComputer available in TP3 for improved OLAP processing.
- Titan Load Statistics - January 26, 2016