Devnexus 2023 live blog

This week, I’m attending the 19th edition of the Devnexus conference in Atlanta! In those 19 years, Devnexus has truly grown into one of the biggest Java and JVM-related conferences in North America, and it’s always been a pleasure to be there. This year marks my fourth attendance as a speaker, and I’ll be doing two talks myself.

In this post, I’ll be live blogging about some of the sessions that I’ve joined. Enjoy!

Five skills to force multiply your technical talent

After a short opening, Arun Gupta kicked off with a keynote around important skills to force multiply your technical talent. I’m not going to spoil too much about it, but the talk was completely about non-technical skills 🙂. All of these skills are nevertheless extremely important for people working in technical fields. Fun fact: this talk also had two free workshops included! Both of them were by far the shortest workshops I’ve ever attended: 30 seconds for the first one, 16 seconds for the second one.

Software architecture in a DevOps world

Next up, in the same room, Bert Jan Schrijver shared his story about software architecture in a DevOps world. We all know what DevOps is about: gradual changes, customer orientation, automation, ownership, collaboration, experimentation and continuous improvement.

Bert Jan set out to apply each of these principles to the craft of software architecture. He also introduced seagull architecture as an alternative to ivory tower architecture. A seagull architect dives down from the air, shits on a team and steals their fries. Rather, an architect should strive to actually collect the shit - that is, the feedback from the team - and make sure to incorporate that into a new iteration of the architecture.

Even though it might all sound great, there will often be “CD & DevOps won’t work here” type objections. Some of them are easy to refute; others take more time - and soft skills - to bring the message across politely. At the end of the day, the real problem may be closer to those stating it than they think…

Jakarta EE and MicroProfile Highlights

This session by Josh Juneau and Edwin Derks provided a nice walkthrough of some recent changes and additions in various Jakarta standards.

Josh started off by sharing the Jakarta highlights. Here’s my personal selection:

  • You can write Jakarta Faces (formerly JSF) views without using XHTML. Unfortunately, building such views still follows an imperative style… 😞
  • CDI Lite is a subset of the full spec, optimised for applications that perform their dependency injection at build time. This should make it easier to ship CDI-based applications as native executables.
  • Jakarta has its own JSON Binding specification, which allows you to map domain objects to / from JSON representations. In a Jakarta environment, this means you don’t have to bring Jackson, Gson or any other library anymore.
  • Jakarta RESTful Web Services now supports multipart upload. This is very useful for situations where you’d like to upload files, such as pictures or videos.
  • If you want to use Jakarta RESTful Web Services from a non-Jakarta environment, now you can. There is a new Java SE Bootstrap API that lets you do that - but remember, you’ll have to bring an implementation, too!
  • The Jakarta Security standard adds support for OpenID Connect. Using a couple of annotations, you can easily secure your Jakarta-based application by requiring users to authenticate using a third party. This is definitely something I’m going to check out! 🤩
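The OpenID Connect support is indeed annotation-driven. Here’s a minimal sketch of what it might look like; the provider URI, client id and the secret expression are placeholders, not values from the talk:

```java
// Hedged sketch: registers an OpenID Connect authentication mechanism for
// the whole application. All values below are illustrative placeholders.
import jakarta.annotation.security.RolesAllowed;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.security.enterprise.authentication.mechanism.http.OpenIdAuthenticationMechanismDefinition;

@OpenIdAuthenticationMechanismDefinition(
    providerURI = "https://idp.example.com/oidc",   // discovery endpoint of the third party
    clientId = "my-client-id",
    clientSecret = "${oidcConfig.clientSecret}"     // resolved via EL at runtime
)
@ApplicationScoped
public class OidcSecurityConfig {
}

// Any bean or endpoint can then restrict access declaratively:
@ApplicationScoped
class GreetingService {
    @RolesAllowed("user")  // only authenticated users with the "user" role
    public String greet(String name) {
        return "Hello, " + name;
    }
}
```

With this in place, unauthenticated requests are redirected to the configured provider for login, which is what makes the feature so attractive compared to hand-rolling an OIDC flow.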

Edwin then continued by sharing the MicroProfile highlights. Here’s my personal selection:

  • Using MicroProfile Health, you can construct liveness, readiness and startup probes that Kubernetes can query to find out about your application’s health. Implementing the HealthCheck interface on any Jakarta component is enough! As a bonus, you can customise the response, adding more detail to a probe.
  • Using MicroProfile Config you can inject properties, e.g. from a Kubernetes ConfigMap, straight into your application. Annotating a field with @Inject @ConfigProperty lets you declare which configuration value you want to have. Configuration sources allow you to actually populate these with concrete values.
  • Using MicroProfile REST Client you can construct an HTTP client by writing a Java interface that declares how to perform the HTTP requests. Annotating it with @RegisterRestClient lets you inject a working implementation at any place.
  • With MicroProfile Fault Tolerance you can annotate methods with @Retry, @Timeout and @Fallback to declare how long you expect a particular method to take, whether you have a fallback, and if/when you want to perform a retry. Looks really nice, and a lot cleaner than having to implement it manually!
    • More powerful and advanced options are @CircuitBreaker and @Bulkhead. These are really advanced options, and require you to properly measure and monitor both sides of the connection.
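To give an idea of how declarative the fault tolerance annotations are, here’s a small sketch; the PriceService and its remote call are hypothetical, not from the talk:

```java
// Hedged sketch of MicroProfile Fault Tolerance on a CDI bean.
import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;
import java.time.temporal.ChronoUnit;

@ApplicationScoped
public class PriceService {

    // Retry up to 3 times, abort any single attempt after 500 ms,
    // and fall back to a cached value if all attempts fail.
    @Retry(maxRetries = 3)
    @Timeout(value = 500, unit = ChronoUnit.MILLIS)
    @Fallback(fallbackMethod = "cachedPrice")
    public double currentPrice(String productId) {
        // the call to a remote pricing API would go here
        throw new IllegalStateException("remote service unavailable");
    }

    // The fallback method must match the original method's signature.
    double cachedPrice(String productId) {
        return 9.99; // illustrative cached value
    }
}
```

The interceptors provided by the implementation take care of the retry loop, the timeout enforcement and the fallback dispatch, so the business method stays free of that plumbing.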

Wired! How your brain learns new (programming) languages

The first talk of Thursday was by Simone de Gijt. Drawing on her background as a speech and language therapist, she spoke about how the human brain works. To start, she went through the various types of memory the brain has to offer. It turns out that all of them have their own characteristics, making them more or less suitable for a particular goal. The interesting part is that this also came with practical tips and tricks to become more effective at reading and understanding code!

But Simone didn’t just share ‘brain hacks’ on how to become more effective in what you already know and do. Drawing from the theory of learning, she also shared practical approaches on how to improve your learning.

Learning can also be a collaborative thing. But… how do you tell somebody they misunderstood the idea, or wrote the code the wrong way? Prefer indirect feedback over direct feedback. Indirect feedback is where you don’t tell someone they’re wrong, but rather repeat what they said with a correct example. Direct feedback is where you say “this is wrong”. Not only is the former a lot more compassionate, it’s also more likely to help the other person learn in an efficient way.

To Production and Beyond: Observability for Modern Spring Applications

To conclude, I joined Jonatan Ivanov as he discussed observability in the Spring ecosystem. The talk started with a short introduction of what observability is all about and why we should care. Environments are chaotic, we have too many unknowns, and things can be perceived differently by different observers (e.g., customers). Because of the law of large numbers, we can be sure that at any point in time, something in our complex cloud architectures is broken.

There are different types of observability signals: logging, metrics and distributed tracing. For each of them, Spring comes with integrations out-of-the-box: Logback for logging, Micrometer for metrics, and Sleuth (Spring Boot 2) or Micrometer Tracing (Spring Boot 3) for distributed tracing.

Rather than manually instrumenting your code with log statements, manually starting/stopping spans, and adding log correlation, the new Observation API makes it a lot easier:

Observation observation = Observation.start("example", registry);
observation.error(exception);
observation.stop();

The registry is configured at startup time, and takes observation handlers (such as logging, metrics, tracing, audit events, …) to wire the observations to the right information collectors.
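As a rough sketch of that wiring, here’s what registering a handler might look like; the printing handler below is purely illustrative, in a real application you’d plug in the handlers shipped by Micrometer and its integrations:

```java
// Hedged sketch: an ObservationRegistry with a hand-rolled handler that
// simply prints lifecycle events. Real setups register logging/tracing/
// metrics handlers instead (Spring Boot auto-configures these for you).
import io.micrometer.observation.Observation;
import io.micrometer.observation.ObservationHandler;
import io.micrometer.observation.ObservationRegistry;

public class ObservationSetup {
    public static void main(String[] args) {
        ObservationRegistry registry = ObservationRegistry.create();

        // Every handler gets callbacks for each observation it supports.
        registry.observationConfig().observationHandler(
            new ObservationHandler<Observation.Context>() {
                @Override
                public boolean supportsContext(Observation.Context context) {
                    return true; // handle all observations
                }

                @Override
                public void onStart(Observation.Context context) {
                    System.out.println("started: " + context.getName());
                }

                @Override
                public void onStop(Observation.Context context) {
                    System.out.println("stopped: " + context.getName());
                }
            });

        // The snippet from the talk then runs against this registry.
        Observation observation = Observation.start("example", registry);
        observation.stop();
    }
}
```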

Alternatively, to add even more context to an observation, you can declare additional key/value pairs to add to the observations:

Observation.createNotStarted("example", registry)
   .lowCardinalityKeyValue("conference", ....)
   .highCardinalityKeyValue("uid", ...)
   .observe(() -> {
      // actual work to be done
      // likely uses a conference and a uid parameter :-)
   });

To clarify, high cardinality means that a key (like “uid”) will have an unbounded number of possible values, while low cardinality means that a key (like “conference”) will have a bounded number of possible values.

Jonatan then continued to demo a working setup with a few Spring Boot components that leveraged Grafana, with Tempo (distributed tracing support) and Loki (log support) installed. The nice thing is that it brings all information into one big dashboard, rather than having to browse through three different dashboards. This setup makes it easy to navigate from logs to traces (using a traceID), from traces to metrics (using tags) and back (using exemplars). Very impressive setup, and I can imagine this provides a lot of value when troubleshooting issues in distributed setups!