Configuring Jersey Servlet Container on Embedded Tomcat

One of the ways of configuring a Jersey Servlet Container on Tomcat is via a web.xml file.
For example, a typical web.xml defines the Jersey Servlet and its mappings:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
    <servlet>
        <servlet-name>api</servlet-name>
        <servlet-class>org.glassfish.jersey.servlet.ServletContainer</servlet-class>
        <init-param>
            <param-name>javax.ws.rs.Application</param-name>
            <param-value>com.example.RestApplication</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>api</servlet-name>
        <url-pattern>/api/*</url-pattern>
    </servlet-mapping>
</web-app>


When starting an embedded version of Tomcat the same Jersey Servlet that was defined in the XML file can be defined via code:

    RestApplication restApplication = new RestApplication();
    ServletContainer servletContainer = new ServletContainer(restApplication);
    Tomcat.addServlet(ctx, "api", servletContainer);
    ctx.addServletMapping("/api/*", "api");

There are three steps involved in configuring Jersey on embedded Tomcat: configuring the embedded server, creating the web context, and registering the Jersey Servlet.

Tomcat Embedded Server Configuration

Tomcat started out as a web server container, meaning that one Tomcat server could host several web applications. As languages such as Ruby and Node grew in popularity, and frameworks like Rails and Express became common choices for web applications, a new pattern emerged: embedding the web server in the application itself, removing the need for any external dependencies.
Even though this feature is not very well documented, Tomcat does support an embedded configuration.

Below is an example of converting a standard server.xml to an embedded server.

<?xml version='1.0' encoding='utf-8'?>

<Server port="8010" shutdown="SHUTDOWN">  
  <!--APR library loader. Documentation at /docs/apr.html -->
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
  <!-- Prevent memory leaks due to use of particular java/javax APIs-->
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />

    <Service name="Catalina">

        <Connector port="8080"
                   redirectPort="9410" />

        <Engine name="Catalina" defaultHost="localhost">

            <Host name="localhost" appBase="webapps"
                  unpackWARs="true" autoDeploy="false">
            </Host>
        </Engine>
    </Service>
</Server>

The main components defined in the XML file are:

  • Server
  • Listeners
  • Service
  • Connector
  • Engine
  • Host

When configuring an embedded Tomcat, the XML file listed above can be converted to Java code as follows:

Tomcat tomcat = new Tomcat();
Path tempPath = Files.createTempDirectory("tomcat-base-dir");
tomcat.setBaseDir(tempPath.toString());

// Configure connector
Connector connector = new Connector();
connector.setPort(8080);
connector.setProperty("maxThreads", "100");
connector.setProperty("minSpareThreads", "100");
connector.setProperty("compression", "on");
connector.setProperty("compressableMimeType", "application/json");
connector.setProperty("connectionTimeout", "8000");

// Register the connector as both the service connector and the default connector
tomcat.getService().addConnector(connector);
tomcat.setConnector(connector);

// Configure tomcat life cycle listeners
JreMemoryLeakPreventionListener jreMemoryLeakPreventionListener = new JreMemoryLeakPreventionListener();
GlobalResourcesLifecycleListener globalResourcesLifecycleListener = new GlobalResourcesLifecycleListener();
ThreadLocalLeakPreventionListener threadLocalLeakPreventionListener = new ThreadLocalLeakPreventionListener();
VersionLoggerListener versionLoggerListener = new VersionLoggerListener();

Server server = tomcat.getServer();
server.addLifecycleListener(versionLoggerListener);
server.addLifecycleListener(jreMemoryLeakPreventionListener);
server.addLifecycleListener(globalResourcesLifecycleListener);
server.addLifecycleListener(threadLocalLeakPreventionListener);

// Create web context
File webContentFolder = Files.createTempDirectory("default-doc-base").toFile();
StandardContext ctx = (StandardContext) tomcat.addWebapp("", webContentFolder.getAbsolutePath());

// Disable TLD jar scanning for better start up performance
StandardJarScanFilter jarScanFilter = (StandardJarScanFilter) ctx.getJarScanner().getJarScanFilter();
jarScanFilter.setTldSkip("*");

// Start the server
tomcat.start();
tomcat.getServer().await();


It is important to set both the default Tomcat connector and the service connector the application will use. By default Tomcat configures an HTTP/1.1 connector listening on port 8080, so if only the service connector is set, Tomcat will try to load both connectors at once.

Golang Reflection and Interfaces

Before diving into interfaces and reflection I want to give a bit of a background on the use case I had to apply them to.

Manage Struct metadata as form of JSON components

In most data designs, data is still relational. Non-relational databases have their use cases, but when dealing with Account/User management, RBAC and other relational data models, a relational database is still the best tool for the job.
In an iterative development process, all columns of a table might not be known beforehand, and having a framework to iterate on quickly becomes very handy. For example, instead of having to add new columns to a table, the concept of a JSON component column can be used: data points that are not searched on can be stored in a JSON string, which allows the data model to be defined at the application level. Projects like OpenStack and Rancher already follow that strategy.
UPDATE: MySQL version 5.7.8 introduced native support for JSON.

Implementing JSON components in Go

A StructTag can be used to define which attributes should be stored in the JSON component.
When persisting the struct to the database the component attributes would be added to the JSON Components string and only the Components string would be persisted to the database.

type Stack struct {
    StringComponent string  `json:"stringComponent,omitempty" genesis:"component"`
    IntComponent    int     `json:"intComponent,omitempty" genesis:"component"`
    BoolComponent   bool    `json:"boolComponent,omitempty" genesis:"component"`
    FloatComponent  float64 `json:"floatComponent,omitempty" genesis:"component"`
    Components      string
}

First attempt at Reflection

At first I created a method on the Stack type which would iterate over all its attributes and build up a JSON string from the attributes tagged with the component StructTag:

func (s *Stack) prep() {
    components := map[string]interface{}{}
    fields := reflect.TypeOf(s).Elem()
    values := reflect.ValueOf(s).Elem()
    for i := 0; i < fields.NumField(); i++ {
        field := fields.Field(i)
        value := values.Field(i)
        if isComponent(field) {
            components[field.Name] = value.Interface()
        }
    }
    c, err := json.Marshal(components)
    if err != nil {
        fmt.Printf("Error creating components JSON object %v\n", err.Error())
        return
    }
    s.Components = string(c)
}

Go Playground sample

The main problem with the approach listed above is that the prep method is tied to the Stack struct and other structs can't reuse it.

Second attempt at Reflection

Instead of calling prep as a struct method, by making Prep a public function any struct can be passed as an argument, and the function will take care of building the JSON component string via reflection.

func Prep(obj interface{}) {
    components := map[string]interface{}{}
    fields := reflect.TypeOf(obj).Elem()
    values := reflect.ValueOf(obj).Elem()
    for i := 0; i < fields.NumField(); i++ {
        field := fields.Field(i)
        value := values.Field(i)
        if isComponent(field) {
            components[field.Name] = value.Interface()
        }
    }
    c, err := json.Marshal(components)
    if err != nil {
        fmt.Printf("Error creating components JSON object %v\n", err.Error())
        return
    }
    values.FieldByName("Components").SetString(string(c))
}

Go Playground sample

Some important points to keep in mind:

  • Go has static and underlying types
  • Difference between Types and Kinds:
    • A Kind represents the specific kind of type that a Type represents.
    • In other words, a Kind is the underlying type and the Type is the static type
    • Example:

Even though the obj argument is an empty interface, at runtime its Kind will be Ptr and its Type will be *Stack (or whatever struct type is passed in).
That is important to understand, since manipulating struct fields with reflection requires a Struct Kind.
For example, the following statement would panic, because the Kind at that point is Ptr rather than Struct:

    reflect.TypeOf(obj).NumField() // panics: reflect: NumField of non-struct type

Accessing a Struct from a Pointer

Initially this might seem a bit alien; however, I like to relate it to how you can dereference pointers in C.
In C you can use the -> notation to access the values a pointer points to:


#include <stdio.h>

struct Stack {
  int x;
};

int main(void) {
  struct Stack s;
  struct Stack *p;
  p = &s;
  s.x = 1;
  printf("s.x %d\n", s.x);   // 1
  printf("&s %p\n", &s);     // address of s
  printf("p %p\n", p);       // address of s
  printf("&p %p\n", &p);     // address of the pointer itself
  printf("p->x %d\n", p->x); // 1
  return 0;
}
In Golang the concept is similar but the syntax is a bit different. The Elem() method returns the value the interface contains; in the case of a pointer, it returns the value being pointed to:

    values := reflect.ValueOf(obj).Elem()

and since the pointer points to a struct, the Field method can then be used:

    field := values.Field(i)

Configuring H2 database for JUnit Tests with OpenJPA

When it comes time to write unit tests for an application, more likely than not there will be a scenario where the tests have to communicate with a database backend. However, using the application's main database during unit test runs can pollute and corrupt the data, so the recommended approach is to use a dedicated database just for the tests.

Provisioning and maintaining a full database like MySQL or PostgreSQL carries a fair amount of overhead. In that regard, the database best suited for unit tests is H2.

The area where H2 excels is that, being an in-memory database, it can be provisioned before the test suite starts and discarded when it finishes.

Below are the steps necessary for configuring an OpenJPA Entity Manager Factory using the H2 driver:

Add maven dependency

<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.196</version>
    <scope>test</scope>
</dependency>

Create Data Source

PoolConfiguration props = new PoolProperties();
props.setUrl("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1");
props.setDriverClassName("org.h2.Driver");
props.setUsername("sa");
props.setPassword("");
dataSource = new DataSource();
dataSource.setPoolProperties(props);

Create Entity Manager Factory

Properties jpaProps = new Properties();  
jpaProps.put("openjpa.ConnectionFactory", dataSource);  
jpaProps.put("openjpa.Log", "log4j");  
jpaProps.put("openjpa.ConnectionFactoryProperties", "true");  
entityManagerFactory = Persistence.createEntityManagerFactory("myfactory", jpaProps);  

Using NATS instead of HTTP for inter service communication

In a micro service architecture the traditional monolithic application is broken down into multiple small, well-scoped domains. That brings several benefits, both from a development perspective, since micro services can be iterated on individually, and from a performance perspective, since they can be scaled separately.

With the move to a micro service architecture, one common problem is how to make all the services communicate with one another.
Typically an application exposes a REST API to the client, so naturally the first option is to use HTTP.
The question then becomes: is HTTP the best messaging protocol for inter service communication?

Looking at HTTP's definition from the official RFC:

The Hypertext Transfer Protocol (HTTP) is an application-level  
protocol for distributed, collaborative, hypermedia information  
systems. It is a generic, stateless, protocol which can be used for  
many tasks beyond its use for hypertext, such as name servers and  
distributed object management systems, through extension of its  
request methods, error codes and headers.  

It is clear that HTTP wasn't built to provide a communication layer between micro services.

On the contrary, looking at the NATS protocol definition

Open Source. Performant. Simple. Scalable.  
A central nervous system for modern, reliable,  
and scalable cloud and distributed systems.  

It becomes clear that it was built with new modern systems in mind.

One thing to keep in mind is that a RESTful API could still be built on top of NATS across all micro services. The key point is that HTTP is not the best protocol for inter service communication.


In theory it all sounds good, but in practice how much faster is NATS?
To test both approaches, a simple benchmark was put together to mimic a real-world example.

The idea is that an HTTP proxy server accepts client connections and routes each request to the appropriate internal micro service. In one scenario HTTP was used to proxy the messages, in the other NATS.

For the HTTP proxy a Node.js Express server was used, and for the micro service a Go app using httprouter and the official NATS go-client.

To generate the client traffic hitting the proxy, the bench-rest tool was used.

Sequential requests with no payload

Overall NATS performed roughly 2.5x faster than HTTP.
Processing 50,000 requests took ~12 minutes with HTTP as the messaging protocol between the proxy and the micro service, versus only ~5 minutes with NATS.

Sequential requests with 1K payload

When adding some data to the requests the behavior was similar and NATS still processed messages ~3x faster than HTTP.

Parallel requests

When a factor of concurrency was added to the tests, NATS performed at its best.
To process 50,000 requests in batches of 2,500 concurrent requests at a time, NATS took 1.6 minutes while HTTP took 18 minutes.

Overall the results show that NATS beats HTTP in both speed and throughput.

The source code used for the benchmarks can be found here