Friday, January 9, 2026

Observability Made Easy - New Relic - Quick Start

A decade-long journey from custom metrics to cloud-native observability

The Evolution

Almost a decade ago, I started exploring metrics collection in software. What began as solving counting challenges has evolved into observability as a cloud service—with almost no code changes required.

Part 1: The Metric Pattern (2015)

📎 The Metric Pattern - Counting Made Easy

Tracking metrics across dimensions (locations × states) creates a Cartesian-product nightmare. The solution: a generic Counter<T> interface enabling Map<Priority, Counter<Result>> to collect 42+ metrics cleanly.

Part 2: Prometheus Metrics (2022)

📎 Prometheus Metrics Made Easy

With microservices, prometheus.io standardized metrics. Simple API calls enabled monitoring, but required code instrumentation and self-hosted infrastructure.

Part 3: New Relic - Zero-Code Observability (Today)

Observability is now a cloud service. With New Relic:

    ✅ No code changes for Java, Python, C#, Node.js, Go

    ✅ Built-in dashboards, distributed tracing, log aggregation

    ✅ APM, Infrastructure, Synthetic monitoring in one platform

New Relic Pricing:

Pricing is based on data ingestion (per GB) and the number of users. It gets more expensive as services grow, but zero code changes and immediate observability often justify the ROI.

Quick Start: Java Application

Option A: Explore Without License

Experiment locally with a mock collector:

    git clone https://github.com/rdara/newrelic.git

    cd newrelic && ./gradlew :RestServer:run

    curl http://localhost:12345/greeting


⚠️ No dashboards/alerting—just agent behavior exploration.

Option B: Full Experience (With License)

    1. Sign up at newrelic.com and get your license key

    2. Download agent from New Relic

    3. Configure newrelic.yml with license key and app name

    4. Run: java -javaagent:newrelic.jar -jar your-app.jar

    5. View at one.newrelic.com → APM
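Step 3 above can be sketched as a minimal newrelic.yml. The agent download ships a full, commented template; the two keys below are the essential ones (the values here are placeholders):

```yaml
# Minimal newrelic.yml sketch -- placeholder values; the template bundled
# with the agent contains many more optional settings.
common: &default_settings
  license_key: 'YOUR_NEW_RELIC_LICENSE_KEY'
  app_name: 'Your App Name'
```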

Summary

2015 → Metric Pattern → High code changes → No infrastructure
2022 → Prometheus → Medium code changes → Self-hosted
Today → New Relic → ZERO code changes → Cloud

Stay tuned for more posts on New Relic covering distributed tracing and custom dashboards.

Ramesh Dara

LinkedIn : GitHub : Blogpost



Tuesday, March 24, 2015

Automation - 3 Axes for ROI and Software Quality

Continuous Integration (CI), Continuous Delivery (CD), and Continuous Monitoring (CM) are becoming the norm in today's fast-paced agile software development. This blog discusses the required "3 axes" of automation for better return on investment (ROI) while improving software quality.
As depicted below, "WHEN, WHAT, WHOM" are the 3 most important axes in determining the effectiveness of the automation.
WHEN: An important question is when the automation must be triggered, and how quickly one learns of failures. For example, when there is a code change (post-commit CI) or a code change request (pre-commit CI), the CI automation kicks off, while monitoring runs at regular intervals. Consider checking a software component's version on production machines: this check should run at deployment as well as at regular intervals, so that when an offline production machine carrying an older-versioned component is added back, the version mismatch is detected. Triggering on a "change", at a "regular interval", or both is required for optimal utilization of automation resources.
WHOM: Whom to notify is an important axis of automation. It's common to send notifications to entire "groups," thus spamming them. If instead the notifications go only to those who can address the failure, this problem is avoided. For example, in pre-commit CI, only the developer who requested the code change is notified. In post-commit CI, all the developers who made code changes since the prior automation run are notified. If your post-commit CI also includes deployment and integration tests, then deployment failures are reported to DevOps/SiteOps, test code failures to quality engineers, and functional failures to developers. Make sure there are no false positives.
WHAT: The most important and often ignored axis of automation is "WHAT": the context-sensitive information required to address the failure. For example, in pre- or post-commit CI, the code changes that caused the failure and the list of failed tests need to be included in the notification. Likewise, the list of failed machines must be included when monitoring fails. The overall productivity of an organization greatly improves with such informative automation notifications. Be sensitive to bandwidth: it's always a bad idea to put attachments (like Surefire HTML reports) in automation notifications. Instead, develop programs or scripts that capture a summary of the required information, with references to additional detail, in the notification. A few items to include:
  • Number of Tests : xx
  • Number of Failed Tests : xx
  • Number of Skipped Tests: xx
  • Number of Successful Tests : xx
  • List of Failed Tests:
    • Test1
    • Test2,..
  • Code Changes
    • File 1
    • File 2
  • List of Machines
    • Machine1
    • Machine2
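A summary like the one above can be assembled by a small helper rather than attaching full reports. A minimal sketch (the class and field names are illustrative, not from any specific CI tool):

```java
import java.util.List;

// Hypothetical builder for the "WHAT" axis: a compact, attachment-free
// notification body summarizing a test run.
public class NotificationSummary {

    public static String build(int total, List<String> failed, List<String> skipped,
                               List<String> changedFiles) {
        int passed = total - failed.size() - skipped.size();
        StringBuilder sb = new StringBuilder();
        sb.append("Number of Tests : ").append(total).append('\n');
        sb.append("Number of Failed Tests : ").append(failed.size()).append('\n');
        sb.append("Number of Skipped Tests: ").append(skipped.size()).append('\n');
        sb.append("Number of Successful Tests : ").append(passed).append('\n');
        sb.append("List of Failed Tests:\n");
        for (String test : failed) sb.append("  - ").append(test).append('\n');
        sb.append("Code Changes:\n");
        for (String file : changedFiles) sb.append("  - ").append(file).append('\n');
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(build(10, List.of("Test1", "Test2"), List.of("Test3"),
                List.of("File1.java", "File2.java")));
    }
}
```

The summary string can then be dropped straight into the notification body, keeping the email small while still actionable.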
The successful and effective automation of an organization brings in good ROI by notifying those who can act on the failures in timely manner.

Tuesday, March 10, 2015

The Resource Interceptor - REST API


In RESTful API design, it's often required to deal with the cross-cutting concerns of a Resource.

It's a de facto standard to have a "common resource" across the organization and all its REST APIs. The "common resource" might have common properties like "id", "creationTime", "updateTime", and "caller", and common behaviors like logging and validations. It's often required to deal with a resource's cross-cutting concerns; a few examples are:

  • Logging the resource - You may want to log every resource, only after filtering out personally identifiable information as per compliance policies.
  • Validating resources - You may want to validate the resource through your custom validator. 
  • Populating common resource properties - You may want to populate the common properties of the resource like, "creationTime".

These cross-cutting concerns need to be handled after the resource is created from the JSON/XML/raw data, but before it reaches the resource handler. But how? We need to know when the data has been deserialized into the resource in order to act on it. This is tricky.

A REST API call executes several filters and interceptors in a pre-determined order. Once the server receives a POST/PUT REST API call, ContainerRequestFilters are called, followed by ReaderInterceptors. One of those ReaderInterceptors deserializes the data into a Resource. The order of execution of these ReaderInterceptors is undefined. Each interceptor calls ReaderInterceptorContext.proceed() to make sure all the registered interceptors are executed.

In order to capture the resource, we create our own ReaderInterceptor, say ResourceInterceptor, let all the ReaderInterceptors complete their processing (including the deserialization interceptor), and deal with our resource cross-cutting concerns before returning from this interceptor.

The sample code looks like following:


import java.io.IOException;

import javax.ws.rs.WebApplicationException;
import javax.ws.rs.ext.Provider;
import javax.ws.rs.ext.ReaderInterceptor;
import javax.ws.rs.ext.ReaderInterceptorContext;

@Provider
public class ResourceInterceptor implements ReaderInterceptor {

    /* (non-Javadoc)
     * @see javax.ws.rs.ext.ReaderInterceptor#aroundReadFrom(javax.ws.rs.ext.ReaderInterceptorContext)
     */
    @Override
    public Object aroundReadFrom(ReaderInterceptorContext context) throws IOException, WebApplicationException {
        // Let the deserializing interceptor create the Resource
        Object obj = context.proceed();
        if (obj instanceof Resource) {
            // Process logging / validation / properties...
        } else if (obj instanceof Resource[]) {
            for (Resource resource : (Resource[]) obj) {
                // Process logging / validation / properties...
            }
        }
        return obj;
    }
}

By calling context.proceed() first within the ReaderInterceptor, we make sure that all the ReaderInterceptors have executed, including the deserializing one. The "obj" we now have is a Java POJO Resource, and one can perform all the resource-specific cross-cutting concerns on it.





Thursday, February 26, 2015

The Metric Pattern - Counting Made Easy

Metrics are always important, and many software products generate various metrics. Here we discuss a simple and elegant metric pattern that can be part of your organization's common library or utilities for effective reuse.

The requirement to support metrics quickly becomes cumbersome when you need the Cartesian product of metrics of each item type with its possible states. As the number of required metric combinations explodes, the traditional approach of keeping a variable per combination doesn't work, and maintenance becomes a nightmare. The suggested metric pattern lets you count anything without an explicit corresponding variable, keeping things simple and elegant, and can live in your common library.

For example, a DevOps engineer writes a monitoring tool to track the health of colocated servers across geographic locations. If these servers are in 2 locations, say California and Nevada, with 2 possible machine states, Up and Down, we have 4 metrics for how many machines are up/down in California/Nevada. Over time, the DevOps engineer wants to collect more granular information about each server, such as whether it is reachable and how fast it responds, while the company expands to more geographic locations. The Cartesian product "m*n" of possible metrics (the number of colocations multiplied by the possible states of a machine) quickly makes the code unmanageable and unmaintainable.

Here we discuss a metric pattern that lets you collect metrics in a way that is simple, clean, and elegant, as well as reusable, maintainable, and extensible.

First, we need a Counter interface that can simply count every possible state.
interface Counter<T> {
    long increment(T key, long count);
    long increment(T key);
    long get(T key);
    long put(T key, Long value);
    Map<T, Long> getMetricsMap();
}

The above Counter interface can count any possible state. With Counter<String>, we can count anything from TOMATOES to CARS. If 'T' is an enum, then we have the complete list of all possible states, which is highly recommended for a clean software design. We can define a type called Result.

public enum Result { PASS, FAIL, SKIP, WARN, EXCEPTION, EXECUTION_TIME, AVERAGE_EXECUTION_TIME }

By defining Counter<Result>, we can count all possible results of a test: how many tests failed, succeeded, or were skipped.

Next, say we want to categorize tests based on their priority, like P0, P1, and P2, to track metrics. Now we want to count how many P1 test cases failed, how many P2 test cases were skipped, and what the average execution time of P3 test cases is. Let's define a Priority enum that has all possible priorities of a test, like the following.

public enum Priority { P0, P1, P2, P3, P4, NONE }

Now we define an implementation class for the Counter<T> interface.

public static class CounterImpl<T> implements Counter<T> {

    private Map<T, Long> mapMetrics = new TreeMap<T, Long>();
    private static final Long ZERO = new Long(0);

    public long increment(T key, long count) {
        if (!getMetricsMap().containsKey(key)) {
            getMetricsMap().put(key, ZERO);
        }
        getMetricsMap().put(key, getMetricsMap().get(key) + count);
        return getMetricsMap().get(key);
    }

    public long increment(T key) {
        return increment(key, 1);
    }

    public long get(T key) {
        long retValue = 0;
        if (getMetricsMap().containsKey(key)) {
            retValue = getMetricsMap().get(key);
        }
        return retValue;
    }

    public long put(T key, Long value) {
        getMetricsMap().put(key, value);
        return value;
    }

    public Map<T, Long> getMetricsMap() {
        return mapMetrics;
    }
}

That's it! We are ready to collect all possible metrics for the Cartesian product of 6 test priorities x 7 possible test results, or 42 metrics, with a data structure of a map of counters:

Map<Priority, Counter<Result>>

Counter and CounterImpl will be in your organization's common library. All you require are enums like Priority and Result (or strings) to count Cartesian metrics with this simple and elegant pattern.

Your code for collecting metrics looks like the following:

Map<Priority, Counter<Result>> mapPriorityCounters = new ConcurrentHashMap<Priority, Counter<Result>>();

for (Priority priority : Priority.values()) {
    mapPriorityCounters.put(priority, new CounterImpl<Result>());
}

mapPriorityCounters.get(Priority.P2).increment(Result.PASS);
mapPriorityCounters.get(Priority.P2).increment(Result.PASS);
mapPriorityCounters.get(Priority.P3).increment(Result.SKIP, 12);
mapPriorityCounters.get(Priority.P4).increment(Result.FAIL, 2);
mapPriorityCounters.get(Priority.P4).increment(Result.FAIL, 2);

for (int i = 0; i < 5; i++) {
    mapPriorityCounters.get(Priority.P1).increment(Result.PASS);
    mapPriorityCounters.get(Priority.P1).increment(Result.EXECUTION_TIME, 3);
    long totalExecutionTime = mapPriorityCounters.get(Priority.P1).get(Result.EXECUTION_TIME);
    long count = mapPriorityCounters.get(Priority.P1).get(Result.PASS);
    mapPriorityCounters.get(Priority.P1).put(Result.AVERAGE_EXECUTION_TIME, new Long(totalExecutionTime / count));
}

for (Map.Entry<Priority, Counter<Result>> priorityEntry : mapPriorityCounters.entrySet()) {
    for (Map.Entry<Result, Long> counterEntry : priorityEntry.getValue().getMetricsMap().entrySet()) {
        System.out.println(priorityEntry.getKey() + "_" + counterEntry.getKey() + ":" + counterEntry.getValue());
    }
}

The result will be (the order of the priorities may vary with the map implementation):

P1_PASS:5
P1_EXECUTION_TIME:15
P1_AVERAGE_EXECUTION_TIME:3
P2_PASS:2
P3_SKIP:12
P4_FAIL:4

As demonstrated in the above code snippet, this metric pattern can be used beyond counting to handle statistical measures like averages.

You can refer/download the code of this pattern at Metric Pattern Java Reference Code.

We have used this pattern in Search Science (for tracking fields, factors, models, profiles), monitoring (whether the desired application version is available across colos), Sonar compliance (Sonar metrics like coverage and violations), and generating Java properties files.

A simple and elegant solution.

Thursday, October 2, 2014

Enum Creation from case-insensitive strings - JSON / REST API


Java enums have come a long way and are widely used in programming. When we define a Java enum, the constants are case-sensitive. This can become quite a nuisance when we deal with mappings to and from a string. We want to be tolerant of our clients by not strictly enforcing case, while at the same time setting standards on how clients receive the data, with only a little Java code. This blog explains a simple yet elegant way to deal with enums, especially in JSON/REST API applications.

The following function provides a means to create a Java enum from a case-insensitive string. It also implicitly validates the passed-in string and lists all possible values as feedback to help the user recover from the error.


@SuppressWarnings("unchecked")
    public static <T extends Enum<T>> T getEnumFromString(Class<T> enumClass, String value) {
        StringBuilder errorMessageValue = null;
        if (enumClass != null) {
            for (Enum<?> enumValue : enumClass.getEnumConstants()) {
                if (enumValue.toString().equalsIgnoreCase(value)) {
                    return (T) enumValue;
                }
            }
            errorMessageValue = new StringBuilder();
            boolean bFirstTime = true;
            for (Enum<?> enumValue : enumClass.getEnumConstants()) {
                errorMessageValue.append(bFirstTime ? "" : ", ").append(enumValue);
                bFirstTime = false;
            }
            throw new IllegalArgumentException(value + " is an invalid value. Supported values are " + errorMessageValue);
        }

        throw new IllegalArgumentException("EnumClass value can't be null.");
    }

Armed with such a function, a Java enum can be nicely integrated with JSON/XML handling in a Jackson-style JSON processor. Moreover, it seamlessly integrates with JSR-303 validations. Here is an example:

public enum BooleanEnum {
    TRUE,
    FALSE;

    @JsonCreator
    public static BooleanEnum fromValue(String value) {
        return getEnumFromString(BooleanEnum.class, value);
    }
}
And if you want to serialize the enum to JSON as lowercase, your enum class can look like this:

public enum BooleanEnum {
    TRUE,
    FALSE;

    @JsonCreator
    public static BooleanEnum fromValue(String value) {
        return getEnumFromString(BooleanEnum.class, value);
    }

    @JsonValue
    public String toJson() {
        return name().toLowerCase();
    }
}

So with such a generic getEnumFromString utility function, we can achieve...
  • Enum creation from case-insensitive strings
  • Validation 
  • Implicit support of JSR-303 Bean Validation
  • Serialization/deserialization control to establish and comply with standards.
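As a quick, self-contained check of the behavior described above, here is a sketch exercising the utility with mixed-case and invalid input (the enum name and values are illustrative, not from the post):

```java
import java.util.StringJoiner;

public class EnumFromStringDemo {

    public enum Color { RED, GREEN, BLUE }

    // Same technique as the post's utility: match ignoring case, and list
    // all supported values in the error message on failure.
    public static <T extends Enum<T>> T getEnumFromString(Class<T> enumClass, String value) {
        if (enumClass == null) {
            throw new IllegalArgumentException("EnumClass value can't be null.");
        }
        for (T enumValue : enumClass.getEnumConstants()) {
            if (enumValue.toString().equalsIgnoreCase(value)) {
                return enumValue;
            }
        }
        StringJoiner supported = new StringJoiner(", ");
        for (T enumValue : enumClass.getEnumConstants()) {
            supported.add(enumValue.toString());
        }
        throw new IllegalArgumentException(value + " is an invalid value. Supported values are " + supported);
    }

    public static void main(String[] args) {
        System.out.println(getEnumFromString(Color.class, "gReEn")); // case-insensitive match
        try {
            getEnumFromString(Color.class, "purple");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // lists all supported values
        }
    }
}
```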

Tuesday, June 10, 2008

Simple Timer Java Program

Timer is a very useful feature. I start here with a very simple program.

You need to schedule a task on the Timer object with a runnable TimerTask-extending object. The schedule can be one-time or repeating, and can be set for a specified time or delay, given in milliseconds.

Timer is thread-safe, so there is no need for external synchronization. Timer doesn't offer real-time guarantees but can scale to a large number of concurrently scheduled tasks.

Here is the program listing, which sets a 20-second timer and displays the current time and the time after 20 seconds.


import java.util.TimerTask;
import java.util.Timer;
import java.util.Date;

/**
 * A simple timer program which demonstrates the use of Timer with TimerTask.
 *
 * @author Ramesh Dara
 */
public class SimpleTimer {

    /**
     * Initiates and schedules a timer that triggers a TimerTask after durationInMilliSecs.
     *
     * @param durationInMilliSecs Delay needed, in milliseconds.
     */
    public SimpleTimer(int durationInMilliSecs) {
        MyTimerTask timerTask = new MyTimerTask();
        Timer timer = new Timer();
        timer.schedule(timerTask, durationInMilliSecs);
    }

    /**
     * MyTimerTask is a runnable TimerTask, which is started/triggered by the timer.
     *
     * @author Ramesh Dara
     */
    public class MyTimerTask extends TimerTask {
        /**
         * Here it's a dummy task which just displays a message.
         * You can do any task here... like sending an email.
         */
        public void run() {
            System.out.println("The timer expired at " + new Date());
        }
    }

    /**
     * A simple test which sets a 20-second timer.
     * @param args
     */
    public static void main(String[] args) {
        System.out.println("Current Time: " + new Date());
        int noOfSeconds = 20;
        new SimpleTimer(noOfSeconds * 1000);
    }
}
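The repeating schedule mentioned above can be sketched as follows (the interval and tick count are arbitrary choices for the demo); scheduleAtFixedRate re-runs the task until the Timer is cancelled:

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.CountDownLatch;

public class RepeatingTimer {

    public static void main(String[] args) throws InterruptedException {
        final CountDownLatch remaining = new CountDownLatch(3);
        final Timer timer = new Timer();
        // Fire every 100 ms after an initial 100 ms delay.
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                System.out.println("tick");
                remaining.countDown();
            }
        }, 100, 100);
        remaining.await(); // wait for 3 ticks
        timer.cancel();    // stop the timer thread so the JVM can exit
        System.out.println("done");
    }
}
```

Note the explicit cancel(): a Timer's background thread is non-daemon by default, so without it the program would not terminate.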

Saturday, June 7, 2008

Sending email through Microsoft Outlook

The structure and program logic are very similar to those of Gmail, as described here.
However, the proper properties are needed.

You can find the needed properties in Outlook:

Right Click on the Outlook Shortcut on your Desktop
Click Properties
Click E-mail Accounts.. under E-mail Accounts

or

Tools -> Options -> Mail Setup Tab and
Click E-mail Accounts...


Then,

Select "View or change existing e-mail account" under E-mail
Click "Change..." while keeping the selection on Microsoft Exchange Server


Note down Microsoft Exchange Server name.


Properties props = new Properties();
props.put("mail.smtp.host", "ServerName");
//props.put("mail.smtp.port", "587"); // Not needed
props.put("mail.debug", "false");
props.put("mail.smtp.auth", "true");
//props.put("mail.smtp.starttls.enable", "true"); // Not needed


And then create your session with your email id and password, which is generally your Windows password.


Session session = getSession(props, "email-id", "password");