<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Code 'n' Roll - Rocking the computer]]></title><description><![CDATA[Code 'n' Roll - Rocking the computer]]></description><link>https://blog.code-n-roll.dev</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1656934367319/gOvWhRxbD.png</url><title>Code &apos;n&apos; Roll - Rocking the computer</title><link>https://blog.code-n-roll.dev</link></image><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 18:06:57 GMT</lastBuildDate><atom:link href="https://blog.code-n-roll.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Kotest + Spring + Testcontainers]]></title><description><![CDATA[Recently, I wanted to rewrite a JUnit integration test for my Spring Boot application. Since it was an integration test, I used Testcontainers to start a database. While there are Kotest extensions for Spring and Testcontainers we have the problem th...]]></description><link>https://blog.code-n-roll.dev/kotest-spring-testcontainers</link><guid isPermaLink="true">https://blog.code-n-roll.dev/kotest-spring-testcontainers</guid><category><![CDATA[Kotlin]]></category><category><![CDATA[Kotest]]></category><category><![CDATA[Testcontainers]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Sat, 16 Nov 2024 10:25:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1731751825319/ea567bcd-b08b-48ff-9442-5aabc6643976.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recently, I wanted to rewrite a JUnit integration test for my Spring Boot application. Since it was an integration test, I used Testcontainers to start a database. 
While there are Kotest extensions for <a target="_blank" href="https://kotest.io/docs/extensions/spring.html">Spring</a> and <a target="_blank" href="https://kotest.io/docs/extensions/test_containers.html">Testcontainers</a>, the problem is that the two do not cooperate.</p>
<p>The Testcontainers extension does not integrate with Spring in a way that ensures the container starts early enough for Spring and overrides the Spring properties (e.g., <code>spring.data.jdbc.url</code>).</p>
<p>While searching for a solution, I came across several articles that seemed somewhat outdated. Therefore, I wanted to share my solution here. The articles I found were:</p>
<ul>
<li><p><a target="_blank" href="https://jakubpradzynski.pl/posts/example-spring-kotest-testcontainers-mongodb-project/">Bootstrap project with Spring Boot, Kotest, Testcontainers &amp; MongoDB | Jakub Prądzyński's Blog</a></p>
</li>
<li><p><a target="_blank" href="https://akobor.me/posts/using-testcontainers-with-micronaut-and-kotest">Using Testcontainers with Micronaut and Kotest | @akobor</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/kotest/kotest/issues/1649">TestContainers Extention and Spring boot startup timing · Issue #1649 · kotest/kotest</a></p>
</li>
</ul>
<p>What all of those solutions have in common is that they wrap the container and then control its lifecycle. This could be achieved through extension functions, static methods, or an abstract test superclass. Having all the lifecycle callbacks, you can then start and stop the container before the Spring application context starts and override the properties.</p>
<p>Although these approaches work, they seemed unnecessarily complex to me.</p>
<h2 id="heading-starting-point">Starting point</h2>
<p>What I started with was a common Spring JUnit test with Testcontainers:</p>
<pre><code class="lang-kotlin"><span class="hljs-meta">@ActiveProfiles(<span class="hljs-meta-string">"default"</span>, <span class="hljs-meta-string">"test"</span>)</span>
<span class="hljs-meta">@Testcontainers</span>
<span class="hljs-meta">@SpringBootTest</span>
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">SomeIntegrationTest</span></span>() {
    <span class="hljs-keyword">companion</span> <span class="hljs-keyword">object</span> {
        <span class="hljs-meta">@Container</span>
        <span class="hljs-keyword">private</span> <span class="hljs-keyword">val</span> postgres = PostgreSQLContainer(<span class="hljs-string">"postgres:16.1"</span>)

        <span class="hljs-meta">@JvmStatic</span>
        <span class="hljs-meta">@DynamicPropertySource</span>
        <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">properties</span><span class="hljs-params">(registry: <span class="hljs-type">DynamicPropertyRegistry</span>)</span></span> {
            with(registry) {
                add(<span class="hljs-string">"spring.flyway.url"</span>) { postgres.jdbcUrl }
                add(<span class="hljs-string">"spring.flyway.user"</span>) { postgres.username }
                add(<span class="hljs-string">"spring.flyway.password"</span>) { postgres.password }
                add(<span class="hljs-string">"spring.r2dbc.url"</span>) {
                    postgres.jdbcUrl.replaceFirst(<span class="hljs-string">"jdbc"</span>, <span class="hljs-string">"r2dbc"</span>)
                }
                add(<span class="hljs-string">"spring.r2dbc.username"</span>) { postgres.username }
                add(<span class="hljs-string">"spring.r2dbc.password"</span>) { postgres.password }
            }
        }
    }
<span class="hljs-comment">// setup, teardown and tests go here</span>
...
}
</code></pre>
<p>While I could keep the <code>@SpringBootTest</code> annotation thanks to the Kotest Spring extension, <code>@Testcontainers</code>, which handles the container lifecycle in JUnit, does not work with Kotest.</p>
<h2 id="heading-the-kotest-version">The Kotest version</h2>
<p>What my predecessors did not have at the time was the <code>@ServiceConnection</code> annotation. You can read more about it <a target="_blank" href="https://spring.io/blog/2023/06/23/improved-testcontainers-support-in-spring-boot-3-1">here</a>. Given a standard Postgres container, it should be possible to use this annotation. So, I experimented and rewrote the test like this:</p>
<pre><code class="lang-kotlin"><span class="hljs-keyword">import</span> io.kotest.extensions.testcontainers.perSpec

<span class="hljs-meta">@ActiveProfiles(<span class="hljs-meta-string">"default"</span>, <span class="hljs-meta-string">"test"</span>)</span>
<span class="hljs-meta">@SpringBootTest</span>
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">SomeIntegrationTest</span> : <span class="hljs-type">StringSpec</span></span>() {

    <span class="hljs-keyword">companion</span> <span class="hljs-keyword">object</span> {
        <span class="hljs-meta">@ServiceConnection</span>
        <span class="hljs-keyword">private</span> <span class="hljs-keyword">val</span> postgres = PostgreSQLContainer(<span class="hljs-string">"postgres:16.1"</span>)
            .apply { <span class="hljs-keyword">this</span>.start() }
    }

    <span class="hljs-keyword">init</span> {
        listener(postgres.perSpec())
        <span class="hljs-comment">// setup, teardown and tests go here</span>
        ...
    }
}
</code></pre>
<p>I also encountered the issue where the container started too late for Spring. To address this, I start the container explicitly in the companion object. Registering the container as a listener using <code>perSpec()</code> then managed the rest of the lifecycle.</p>
<p>With <code>spring-boot-testcontainers</code> in the classpath, there’s no need to override properties manually anymore. We can just add the <code>@ServiceConnection</code> annotation.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>So, all in all, instead of having to write our own lifecycle wrapper, we just use <code>perSpec()</code> together with a manual start via <code>.apply { this.start() }</code> and get a nicely working integration test. The <code>@ServiceConnection</code> annotation could also have been applied to the original test, but it’s great to see that it works seamlessly with the Kotest Spring extension.</p>
]]></content:encoded></item><item><title><![CDATA[Kotest - YAML matchers]]></title><description><![CDATA[I am happy to tell you all about an upcoming Kotest feature that will be in one of the future versions (hopefully the next one ;) ) of Kotest: YAML matchers
What makes those matchers special for me is that they are my first contribution to the Kotest...]]></description><link>https://blog.code-n-roll.dev/kotest-yaml-matchers</link><guid isPermaLink="true">https://blog.code-n-roll.dev/kotest-yaml-matchers</guid><category><![CDATA[Kotest]]></category><category><![CDATA[Kotlin]]></category><category><![CDATA[YAML]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Tue, 05 Nov 2024 14:01:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730451635193/9e7228ea-1572-4ad6-9cb6-e87f748e14d2.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I am happy to tell you all about an upcoming Kotest feature that will be in one of the future versions (hopefully the next one ;) ) of Kotest: YAML matchers</p>
<p>What makes those matchers special for me is that they are my first contribution to the Kotest project. So, “achievement unlocked”, I could say.</p>
<p>The YAML matchers currently cover two aspects:</p>
<ul>
<li><p>valid YAML</p>
</li>
<li><p>equal YAML</p>
</li>
</ul>
<p>and for those, you have four methods:</p>
<ul>
<li><p><code>shouldBeValidYaml</code></p>
</li>
<li><p><code>shouldNotBeValidYaml</code></p>
</li>
<li><p><code>shouldEqualYaml</code></p>
</li>
<li><p><code>shouldNotEqualYaml</code></p>
</li>
</ul>
<p>The usage is quite simple. All of the methods are extension functions on <code>String</code>. Therefore, when importing from <code>io.kotest.assertions.yaml</code> you can call those on any String. E.g. when checking for valid YAML:</p>
<pre><code class="lang-kotlin"><span class="hljs-string">"""
foo: bar
"""</span>.shouldBeValidYaml()
</code></pre>
<p>Equally simple, when comparing YAML (notice the infix notation):</p>
<pre><code class="lang-kotlin"><span class="hljs-string">"""
key: value
"""</span> shouldEqualYaml <span class="hljs-string">"key: value"</span>
</code></pre>
<p>Hopefully, this will help others improve their tests. Additionally, I’d be happy to add more YAML matchers if others have suggestions. Lastly, the whole feature is multiplatform ready, so you can use it in your KMP project.</p>
<h2 id="heading-implementation-details">Implementation details</h2>
<p>If anyone is interested, I can go more into the details. It is actually pretty simple. The base for the matchers is the <a target="_blank" href="https://github.com/charleskorn/kaml">KAML</a> library, which is used for parsing the YAML. Parsing is already everything that is needed for checking validity: if KAML throws an exception, we know that the YAML isn’t valid and the check fails (or passes, if you check for not valid).</p>
<p>When checking for equality, we also start with parsing the YAML. Then, we compare the contents of the YAML, again relying on KAML.</p>
<p>And that’s all of the magic in the background.</p>
<h2 id="heading-multiplatform-and-java-8">Multiplatform and Java 8</h2>
<p>What had been quite a journey for me was the multiplatform and Java 8 compatibility. Since Kotest supports all multiplatform targets up to tier 3, I tried to cover those targets as well (I could have chosen an easier way, but that was my personal challenge). I then found out that KAML does not support all multiplatform targets. Adding them turned out to be a fairly easy <a target="_blank" href="https://github.com/charleskorn/kaml/pull/629">change</a>.</p>
<p>Unfortunately, <a target="_blank" href="https://github.com/krzema12/snakeyaml-engine-kmp">SnakeYAML Engine KMP</a>, on which KAML is based, did not support Java 8. Therefore, it was incompatible with Kotest. In order to have <a target="_blank" href="https://github.com/krzema12/snakeyaml-engine-kmp/pull/255">Java 8 compatibility</a> there, I had to go one level deeper (insert random Inception GIF here).</p>
<p>SnakeYAML uses an <a target="_blank" href="https://github.com/ethauvin/urlencoder">urlencoder</a>, and this one was the last link in the chain. So, in order to finish my YAML matchers, I had to start here: I <a target="_blank" href="https://github.com/ethauvin/urlencoder/pull/19">adjusted the urlencoder</a> so that SnakeYAML could be modified, which in turn allowed KAML to update its dependency version. Finally, I could implement the YAML matchers with full multiplatform support.</p>
<p>Of course, I didn’t do everything by myself. I received a lot of help from the maintainers of the respective projects, and everyone was super helpful. It was a very nice open-source experience.</p>
<p>Although it took longer and required more hops than I ever expected, I’m glad that I could finish this feature.</p>
]]></content:encoded></item><item><title><![CDATA[Course Review: Udacity AI Programming with Python]]></title><description><![CDATA[1. Introduction
Beginning in December, I had the chance to participate in a Bertelsmann scholarship. I was allowed to start the Udacity nanodegree "AI Programming with Python" for free. Last month, I finished the nanodegree successfully, and therefor...]]></description><link>https://blog.code-n-roll.dev/course-review-udacity-ai-programming-with-python</link><guid isPermaLink="true">https://blog.code-n-roll.dev/course-review-udacity-ai-programming-with-python</guid><category><![CDATA[Python]]></category><category><![CDATA[AI]]></category><category><![CDATA[UdacityNanodegree]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Tue, 23 Jul 2024 13:22:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1721646975601/29839b86-d6fa-4780-8e3d-530568678a03.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-1-introduction">1. Introduction</h2>
<p>Beginning in December, I had the chance to participate in a <a target="_blank" href="https://www.bertelsmann-university.com/de/employee-scholarship-program.html">Bertelsmann scholarship</a>. I was allowed to start the Udacity nanodegree "AI Programming with Python" for free. Last month, I finished the nanodegree successfully, and therefore I wanted to write a short review about the course.</p>
<h2 id="heading-2-udacity-overview">2. Udacity Overview</h2>
<p>If you do not know Udacity, let me give you a short overview. Udacity provides online classes similar to Coursera or Udemy. As far as I know, most of the classes cover technical or management content. What makes Udacity unique is that they provide so-called "nanodegrees." Basically, this means the classes are more in-depth, and your projects will be checked by humans.</p>
<p>If you want to know more about nanodegrees, check out this blog post: <a target="_blank" href="https://www.udacity.com/blog/2016/07/nanodegree-101.html">Nanodegree 101: What is a Nanodegree Program? | Udacity</a></p>
<h2 id="heading-3-description-of-the-course">3. Description of the Course</h2>
<p>As I already mentioned, I chose "AI Programming with Python" as my nanodegree. At first, I thought I could easily finish the class by spring, but I have to admit that I needed nearly the whole time we were given.</p>
<p>The nanodegree is split into seven courses:</p>
<ol>
<li><p>Introduction to AI programming</p>
</li>
<li><p>Introduction to Python for AI Programmers</p>
</li>
<li><p>Numpy, Pandas, Matplotlib</p>
</li>
<li><p>Linear Algebra Essentials</p>
</li>
<li><p>Calculus Essentials</p>
</li>
<li><p>Neural Networks - AI Programming with Python</p>
</li>
<li><p>Create Your Own Image Classifier (the final project)</p>
</li>
</ol>
<p>As you can see, the course starts with basic Python and progresses to more advanced libraries. Then there is a big part covering the math behind neural networks. Lastly, you finally program your own neural network.</p>
<p>To summarize, the nanodegree consists of math and Python. 😂</p>
<h2 id="heading-4-course-content-review">4. Course Content Review</h2>
<p>I am not a Python developer, but I had tried some Python before. Therefore, the language basics covered at the beginning were quite easy for me. The parts that covered the libraries more in-depth (e.g., Pandas and Pytorch) were more complicated but manageable. Most of the programming took place inside the browser using a virtual machine that either ran a simple text editor or a Jupyter notebook. For the short exercises and the first project, this was sufficient. For the final project, I decided to use my local IDE and GitHub, which is also supported.</p>
<p>The courses covered the math basics, which was okay. The math behind neural networks was more challenging for me. It had been a while since I had my last math lecture. Still, especially the videos provided by <a target="_blank" href="https://www.3blue1brown.com/">3Blue1Brown</a> were easy to understand.</p>
<p>Lastly, the final project also took some time to finish. I had to look up previous lessons and read the Pytorch documentation to find what I needed. I suppose for somebody who is not a developer, this must be much harder.</p>
<h2 id="heading-5-assessment-amp-feedback">5. Assessment &amp; Feedback</h2>
<p>I was surprised by the detailed assessment when I received the feedback for my first project. I thought that some automated checks would run over the code (e.g., linting and unit tests) after turning it in. Instead, I received detailed feedback from a human reviewer who mentioned errors but also suggested optional improvements. For the second and larger project, the feedback was as good as before.</p>
<p>In addition to the "official" project feedback, a Udacity community was provided. In my case, it consisted only of other Bertelsmann scholarship recipients and Udacity mentors/moderators. I was skeptical at first but later used it as a quick way to ask questions about the projects or things unclear from the classes. The mentors usually responded within a day and solved all my issues.</p>
<h2 id="heading-6-outcome-amp-impact">6. Outcome &amp; Impact</h2>
<p>So far, I have not been able to put my newly learned skills to use. Nevertheless, knowing how a neural network works internally and knowing that one can take already trained networks to adjust them to other use-cases helps me keep my eyes open for opportunities. Additionally, since AI is a very common topic these days, it helps me distinguish what's possible and what's not.</p>
<p>Besides the main topic, I gained a lot of experience with Jupyter notebooks. This might be useful in the future, especially since Kotlin notebooks are now available as well.</p>
<p>Repeating a lot of math wasn't my favorite part of the class, but I'm sure that the repetition won't hurt 😅</p>
<h2 id="heading-7-summary-amp-recommendation">7. Summary &amp; Recommendation</h2>
<p>All in all, the nanodegree was quite challenging, but I am happy that I finished it. If you consider taking this class, you should be aware that it is not a general AI overview. It covers only neural networks, but those in-depth. If this is interesting for you or you plan to use a neural network soon, then this class might be well suited for you.</p>
<p>What I cannot judge is the pricing of the nanodegree, since I received a scholarship.</p>
]]></content:encoded></item><item><title><![CDATA[Kafka Streams - Lessons Learned]]></title><description><![CDATA[I recently had to work with Kafka Streams for the first time and, as usual, when encountering a new technology, made some mistakes. I want to share the learnings from working the first time with Kafka Streams with you.
Without getting too much into d...]]></description><link>https://blog.code-n-roll.dev/kafka-streams-lessons-learned</link><guid isPermaLink="true">https://blog.code-n-roll.dev/kafka-streams-lessons-learned</guid><category><![CDATA[kafka]]></category><category><![CDATA[Kafka streams]]></category><category><![CDATA[Kotlin]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Wed, 08 Nov 2023 13:49:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/bezLqq0HHCY/upload/7cccd9357c0374ab0cd009e9193155e4.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I recently had to work with Kafka Streams for the first time and, as usual, when encountering a new technology, made some mistakes. I want to share the learnings from working the first time with Kafka Streams with you.</p>
<p>Without getting too much into details, I would like to sketch my use case. I had to consume a single topic which contained a lot of different user events. Based on a single type of event I had to collect all of them belonging to the same "resource" (e.g. a webpage). Having identified those, I had to bring them into a time-based order and see if a continuous interaction happened. If the interaction reached a certain threshold (e.g. the user spent a certain time on a webpage) I had to forward a single event but no more. I tried to sketch it in the following picture:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699428620705/703c64c3-3d07-419f-b4be-91492157ea51.png" alt class="image--center mx-auto" /></p>
<p>This is the event stream for a single user. The events arrive out of order and are mixed. I am only interested in the square events. Additionally, I have to wait until at least two of the same colour arrive, that's my threshold. More than two yellow squares should not lead to more output.</p>
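<p>The filtering and threshold logic described above can be sketched in plain Kotlin. This is only an illustration of the business rule, not the streaming code; the <code>Event</code> model and its field names are made up for the sketch:</p>

```kotlin
// Hypothetical event model for the sketch; the real events were Avro records.
data class Event(val shape: String, val colour: String, val timestamp: Long)

// Keep only the relevant events, group them per colour ("resource"),
// and emit exactly one output per colour once the threshold is reached,
// no matter how many further matching events arrive.
fun collectTriggeredColours(events: List<Event>, threshold: Int = 2): Set<String> =
    events
        .filter { it.shape == "square" }   // only the squares are of interest
        .groupingBy { it.colour }
        .eachCount()
        .filterValues { it >= threshold }  // at least two of the same colour
        .keys                              // one output per colour, never more
```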
<p>Since the requirement involved some grouping, my initial idea was to use a state store, but...</p>
<h2 id="heading-statestore-is-primarily-local">StateStore is primarily local</h2>
<p>You can use a <a target="_blank" href="https://docs.confluent.io/platform/current/streams/architecture.html#state">StateStore</a> to store data and compare it e.g. to other incoming events. The stores use a Kafka Topic (a topic with the <code>changelog</code> postfix) for fault tolerance and recreating the store in case of a restart (see also the good explanation <a target="_blank" href="https://stackoverflow.com/a/61554099">here</a>). <strong>But</strong> although this topic is replicated in the Kafka cluster, it's being filled primarily locally and stored on local disk. I started with this code:</p>
<pre><code class="lang-kotlin">        streamsBuilder
            .addStateStore(
                Stores.keyValueStoreBuilder(
                    Stores.persistentKeyValueStore(STATE_STORE_NAME),
                    Serdes.String(),
                    valueSerde
                )
            )
            .stream&lt;String, GenericRecord&gt;(topic)
            .filter { _, value -&gt; ... }
            .process(::MyProcessor, STATE_STORE_NAME)
</code></pre>
<p>But in my case it didn't work because I needed a global view of all the stored events. Therefore, a state store was the wrong choice. But, it wasn't only due to the local StateStore that this was problematic, but also due to my source. Before we talk about this, a note on load.</p>
<h2 id="heading-if-you-filter-incorrectly-you-might-end-up-duplicating-your-source-topic">If you filter incorrectly, you might end up duplicating your source topic</h2>
<p>As already mentioned, my application consumed a rather large topic, and because I used the StateStore incorrectly, we ended up nearly duplicating the incoming messages (the grouping that was part of the requirement did not work). The wrong grouping wasn't a simple coding error; there were integration tests for it. It was a logical error caused by my lack of Kafka knowledge, directly connected to the next problem (message order). How can you avoid this? I would say there is no general solution, but testing more thoroughly in pre-production would at least have caught some of the problems before they reached production.</p>
<h2 id="heading-check-your-sources-message-orderkey">Check your source's message order/key</h2>
<p>This was a big beginner's mistake. When Kafka distributes messages across partitions, it does so based on the key. I knew this when we started but somehow assumed that the source topic used a key so that all relevant messages would arrive in the order I needed (in this case, all messages for one user arriving in order on the same partition). Assuming this was a huge mistake. Our source used random keys, i.e. all messages were distributed randomly across all partitions. The providing team's goal was to distribute the load evenly, not to group messages.</p>
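<p>To illustrate why the key matters: Kafka's default partitioner hashes the serialized key (using murmur2) and takes the result modulo the partition count. A simplified sketch of the principle:</p>

```kotlin
// Simplified sketch of key-based partitioning. Kafka's actual default
// partitioner applies a murmur2 hash to the serialized key bytes, but the
// principle is the same: identical keys always map to the same partition.
fun partitionFor(key: String, numPartitions: Int): Int =
    (key.hashCode() and Int.MAX_VALUE) % numPartitions  // mask keeps the hash non-negative
```

<p>With a stable key such as a user id, every message for that user lands on the same partition and keeps its relative order; with a random key per message, the messages scatter across all partitions.</p>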
<p>After finding out about the upstream topic's key, there was no way this would change. The other team wouldn't and couldn't change their partitioning. Repartitioning with Kafka Streams is not a problem. We can use a group-by for this:</p>
<pre><code class="lang-kotlin">        streamsBuilder
            .stream&lt;String, GenericRecord&gt;(topic)
            .filter { _, value -&gt; ... }
            .groupBy({ _, value -&gt; <span class="hljs-string">"""<span class="hljs-subst">${value.some}</span><span class="hljs-subst">${value.thing}</span>"""</span> }, Grouped.`<span class="hljs-keyword">as</span>`(<span class="hljs-string">"group-by-something"</span>))
</code></pre>
<p>Still, ordering messages is problematic. Simply repartitioning does not bring the messages in order.</p>
<h2 id="heading-ordering-out-of-order-messages-is-hard">Ordering out-of-order messages is hard</h2>
<p>For my use case, I had two choices to get the messages back in order:</p>
<ol>
<li><p>Store the messages locally in a StateStore and push them out in order from time to time (see <code>ReorderIntegrationTest</code> in <a target="_blank" href="https://github.com/confluentinc/kafka-streams-examples/pull/411/files#diff-fd800eed8485bf0b272efbf670a3fa7e8609b5ba629da6434f946b41eb36666f">this PR</a>)</p>
</li>
<li><p>Use a <a target="_blank" href="https://developer.confluent.io/tutorials/create-session-windows/kstreams.html">session window</a> to group together events based on their timestamp</p>
</li>
</ol>
<p>Luckily, the session window perfectly fit my use case:</p>
<pre><code class="lang-kotlin">        streamsBuilder
            <span class="hljs-comment">// add the TimeStampExtractor</span>
            .stream&lt;String, GenericRecord&gt;(topic, Consumed.with(timestampExtractor))
            .filter { _, value -&gt; ... }
            .groupBy({ _, value -&gt; <span class="hljs-string">"""<span class="hljs-subst">${value.some}</span><span class="hljs-subst">${value.thing}</span>"""</span> }, Grouped.`<span class="hljs-keyword">as</span>`(<span class="hljs-string">"group-by-something"</span>))
            .windowedBy(
                <span class="hljs-comment">// Define the session window according to the use-case</span>
                SessionWindows.ofInactivityGapAndGrace(
                    Duration.ofSeconds(eventIntervalSeconds),
                    Duration.ofSeconds(gracePeriodSeconds)
                )
            )
            <span class="hljs-comment">// aggregate all related events in some intermediate format</span>
            .aggregate(
                <span class="hljs-comment">// initial</span>
                ::AggregationRoot,
                <span class="hljs-comment">// aggregation</span>
                { _, value, aggregationRoot-&gt;
                    AggregationRoot(
                        aggregationRoot.events + value
                    )
                },
                <span class="hljs-comment">// merge</span>
                { _, left, right-&gt;
                    left + right
                },
                Materialized.with(Serdes.String(), JsonSerde(...))
            )
            <span class="hljs-comment">// only pass on finished sessions, not intermediate results</span>
            .suppress(untilWindowCloses(unbounded()))
</code></pre>
<h2 id="heading-session-windows-end-when-a-message-for-a-new-window-arrives">Session windows end when a message for a new window arrives</h2>
<p>This was a little bit tricky to find out but could be replicated in an integration test. A session window consists of all events that are no more than a certain duration apart. When using session windows you can provide a <code>TimestampExtractor</code> so that Kafka Streams knows which timestamp to use for session assignment. But to know when a session ends, one event has to arrive which starts a new session. This is due to the fact that Kafka Streams doesn't use a wall clock internally but only relies on the event timestamps.</p>
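<p>The session semantics can be sketched in plain Kotlin. This illustrates the windowing idea only, not the Kafka Streams API: events belong to the same session as long as consecutive event timestamps are no more than the inactivity gap apart.</p>

```kotlin
// Sketch of event-time session assignment: sort by timestamp, then start a
// new session whenever the gap to the previous event exceeds the inactivity
// gap. Kafka Streams does this incrementally and per key, based purely on
// event timestamps, never on the wall clock.
fun sessionize(timestamps: List<Long>, inactivityGapMs: Long): List<List<Long>> {
    val sessions = mutableListOf<MutableList<Long>>()
    for (ts in timestamps.sorted()) {
        val current = sessions.lastOrNull()
        if (current != null && ts - current.last() <= inactivityGapMs) {
            current.add(ts)                  // within the gap: same session
        } else {
            sessions.add(mutableListOf(ts))  // gap exceeded: a new session starts
        }
    }
    return sessions
}
```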
<p>Now that I had a way to group and aggregate my events, I ran into another problem.</p>
<h2 id="heading-message-size-during-aggregation">Message size during aggregation</h2>
<p>When aggregating session windows and collecting all messages, the session might grow considerably. Setting <code>max.request.size</code> can help, but considering that a session can grow as large as your typical user interaction, it is better to find a way to reduce the number of messages in either the aggregation or the merging. In my case, the merge of two sessions contained a lot of duplicates, so it helped to use a set.</p>
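<p>A minimal sketch of that idea, with the aggregate holding plain strings instead of the real event type:</p>

```kotlin
// Keeping the collected events in a Set instead of a List means the merge
// of two overlapping sessions silently drops duplicates, which keeps the
// aggregate, and thus the produced record, small.
data class AggregationRoot(val events: Set<String> = emptySet()) {
    // aggregation step: add a single event
    operator fun plus(event: String) = AggregationRoot(events + event)
    // merge step: combine two sessions, deduplicating via set union
    operator fun plus(other: AggregationRoot) = AggregationRoot(events + other.events)
}
```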
<h2 id="heading-final-thoughts">Final thoughts</h2>
<p>All the points I mentioned here were the problems that I encountered. Besides that, I want to state that after having fixed those, the Kafka Streams application runs smoothly and has a great performance. Also, the fluent stream definition is quite readable. Therefore, don't be afraid to use Kafka Streams if the use case fits.</p>
]]></content:encoded></item><item><title><![CDATA[Advent of Code 2022 Recap]]></title><description><![CDATA[Introduction
This year was the first one in which I participated in Advent of Code (AoC). I am not sure why I haven't noticed it before. Maybe I was not interested, since coding competitions are not related to my daily work. Additionally, competition...]]></description><link>https://blog.code-n-roll.dev/advent-of-code-2022-recap</link><guid isPermaLink="true">https://blog.code-n-roll.dev/advent-of-code-2022-recap</guid><category><![CDATA[Kotlin]]></category><category><![CDATA[AdventOfCode2022]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Fri, 23 Dec 2022 19:30:52 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>This year was the first one in which I participated in <a target="_blank" href="https://adventofcode.com/">Advent of Code</a> (AoC). I am not sure why I haven't noticed it before. Maybe I was not interested, since coding competitions are not related to my daily work. Additionally, competitions like Google Code Jam require a lot of time and effort.<br />Nevertheless, my company started an internal competition this year with a private leaderboard, so I wanted to give it a try.</p>
<h2 id="heading-about-aoc">About AoC</h2>
<p>As I already mentioned, AoC is a coding competition. Each day you get a new challenge and unlock a part of the story (which was quite funny). According to <a target="_blank" href="https://en.wikipedia.org/wiki/Advent_of_Code">Wikipedia</a>, the competition started in 2015, so it seems I am a laggard on the adoption curve. What I couldn't figure out is why exactly <a target="_blank" href="https://twitter.com/ericwastl?s=20&amp;t=Im0vAYwDYIsghIMNo9zK-Q">Eric Wastl</a> started the whole thing and what qualifies him. Still, he does a good job.</p>
<h2 id="heading-my-participation">My participation</h2>
<p>Some people use AoC to learn a new programming language. It is a good chance to try a new language on some "real" problems, rather than examples from books. Though that sounded like a good idea, I stuck to my favorite language these days, which is Kotlin. This also gave me a speed advantage in the competition, since I did not have to adjust to a new language. As a bonus, the <a target="_blank" href="https://www.youtube.com/playlist?list=PLlFc5cFwUnmwxQlKf8uWp-la8BVSTH47J">JetBrains people also discussed all challenges up to day 12</a>, so I could compare my solution to the ones presented there.</p>
<p>What I gave up pretty quickly was a place on the global leaderboard. As I mentioned, I don't know about the previous years, but AI has proven to be a big advantage. After some Twitter research I could see that, at least for the first few days, AI spat out perfect solutions within minutes. On the one hand, that is pretty interesting regarding the advances in AI technology, but on the other hand, it destroys the competition. My main goal was to reach first place on my company's leaderboard.</p>
<p>Honestly, I had quite a good run until day 16. I was even placed first for a short period of time. Day 16 proved to be a challenge I could not solve easily. Of course, there is the <a target="_blank" href="https://www.reddit.com/r/adventofcode/">Reddit board</a> and checking GitHub is also possible. Still, I wanted to find the solution myself. Of course, as in many coding competitions, AoC boils down to algorithms. While a simple solution works in the first few days, the later challenges are too complex for simple solutions. Repeating things like <a target="_blank" href="https://en.wikipedia.org/wiki/Breadth-first_search">BFS</a> is alright, and I also learned about the <a target="_blank" href="https://en.wikipedia.org/wiki/Chinese_remainder_theorem">Chinese remainder theorem</a> but when things got really difficult I had to give up.<br />I think many people will agree with me that their time is limited. Hacking one approach for one or two hours is fun. Realizing that it doesn't work out is pretty bad because a rewrite would take nearly the same amount of time (with no guarantee that it'll work this time). While that's a blocker for me others might enjoy the additional challenge.</p>
<p>Nevertheless, although I had to give up, the overall experience was quite fun. The overarching story made me smile and I could repeat some algorithms. Additionally, I learned one or two things.</p>
<h2 id="heading-things-i-learnedre-discovery">Things I learned/re-discovered</h2>
<p>Just as a short list:</p>
<ul>
<li><p>chunked() on a (Kotlin) collection</p>
</li>
<li><p>I don't wanna miss Kotlin ranges</p>
</li>
<li><p>windowed() on a (Kotlin) collection</p>
</li>
<li><p>I hate exercises involving two-dimensional arrays (but I got better over time ^^)</p>
</li>
<li><p>Chinese remainder theorem</p>
</li>
<li><p>when it comes to coding something fast, I tend to rely on imperative programming (for loops, break, continue, vars) rather than a functional style</p>
</li>
<li><p>seeing other people's code also helped; sometimes they wrote some beautiful solutions</p>
</li>
</ul>
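<p>For anyone curious, the first three list items can be sketched in a few lines of plain Kotlin (the numbers are made up for illustration, not taken from an actual puzzle input):</p>

```kotlin
fun main() {
    val calories = listOf(1000, 2000, 3000, 4000, 5000, 6000)

    // chunked(): split a list into fixed-size groups
    println(calories.chunked(2).map { it.sum() })  // [3000, 7000, 11000]

    // windowed(): sliding window over the list (step defaults to 1)
    println(calories.windowed(3) { it.sum() })     // [6000, 9000, 12000, 15000]

    // ranges: concise iteration and containment checks
    println((1..10).count { it in 4..6 })          // 3
}
```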
<h2 id="heading-summary">Summary</h2>
<p>Next year I will participate again, for sure. Getting up at 6 a.m. German time isn't too bad. And my colleagues should be prepared for me to keep up until the end ;)</p>
]]></content:encoded></item><item><title><![CDATA[Getting started with Spring and Coroutines - Part 3]]></title><description><![CDATA[Even Coroutines have to be taken with a grain of salt. This article is about the things that will not work when you go reactive in Spring. Though Spring is trying to support Kotlin features as good as possible it still lacks some.
Initially, I was pl...]]></description><link>https://blog.code-n-roll.dev/getting-started-with-spring-and-coroutines-part-3</link><guid isPermaLink="true">https://blog.code-n-roll.dev/getting-started-with-spring-and-coroutines-part-3</guid><category><![CDATA[Kotlin]]></category><category><![CDATA[Spring]]></category><category><![CDATA[software development]]></category><category><![CDATA[Reactive Programming]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Thu, 13 Oct 2022 15:44:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1665509385099/QpKtNCPXI.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Even Coroutines have to be taken with a grain of salt. This article is about the things that will not work when you go reactive in Spring. Though Spring tries to support Kotlin features as well as possible, it still lacks in some areas.</p>
<p>Initially, I was planning to give you some specific examples of things that do not work. Like this annotation or that one. But while doing my research I came to a different conclusion. It is not about specifics that do not work well with Coroutines. I came to a general conclusion:</p>
<blockquote>
<p>Expect that anything that uses annotation magic does not work</p>
</blockquote>
<p>There might be exceptions, since I am pretty sure that I do not know every single annotation, but regarding the ones I use most often, nothing works with Coroutines out of the box.</p>
<p>Let's take a look at some examples.</p>
<h2 id="heading-cacheable">Cacheable</h2>
<p>If you want to annotate a suspend function with <code>@Cacheable</code> you will be quite disappointed. This is because the Kotlin compiler modifies the method signature and adds a <a target="_blank" href="https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.coroutines/-continuation/">Continuation</a> parameter, and that <code>Continuation</code> messes with the caching interceptor. There is a pretty good <a target="_blank" href="https://stackoverflow.com/questions/64372602/how-to-use-cacheable-with-kotlin-suspend-funcion">Stack Overflow answer</a> regarding this.</p>
<h3 id="heading-workaround">Workaround</h3>
<p>The general solution for all incompatibilities presented here is to go back to the basics, i.e., we code manually what the annotations do for us automagically. In the case of the <code>@Cacheable</code> annotation we must use the <code>CacheManager</code> ourselves. E.g., when using Caffeine, we need a <code>CacheManager</code> bean:</p>
<pre><code class="lang-kotlin">    <span class="hljs-meta">@Bean</span>
    <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">cacheManager</span><span class="hljs-params">()</span></span>: CacheManager =
        CaffeineCacheManager().apply {
            setCaffeine(Caffeine.newBuilder().maximumSize(<span class="hljs-number">10_000</span>))
        }
</code></pre>
<p>And then we can use the Bean to create Caches when needed:</p>
<pre><code class="lang-kotlin">    <span class="hljs-keyword">private</span> <span class="hljs-keyword">val</span> cache: Cache = cacheManager.getCache(<span class="hljs-string">"cacheName"</span>)!!
</code></pre>
<p>Lastly, we check for presence and insert into the cache when necessary:</p>
<pre><code class="lang-kotlin">        val cached = cache.get(key, MyClass::class.java)
        return if (cached != null) {
            cached
        } else {
            ...
            cache.put(key, myClass)
            myClass
        }
</code></pre>
<p>Additionally, as you might have read in the Stack Overflow answer, you could also fall back to using a <code>Deferred</code> as the return value. But using a <code>Deferred</code> here might bring other problems, since we are no longer invoking a method but rather starting a job.</p>
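<p>To make the <code>Deferred</code> idea concrete, here is a hypothetical, framework-free sketch (my own toy class, not Spring's cache abstraction and not the approach from the answer verbatim). Caching the <code>Deferred</code> instead of the value means concurrent callers for the same key share one in-flight computation:</p>

```kotlin
import java.util.concurrent.ConcurrentHashMap
import kotlinx.coroutines.*

// Hypothetical sketch: cache Deferred<V> so callers for the same key
// await one shared computation instead of each computing the value.
class DeferredCache<K : Any, V>(private val scope: CoroutineScope) {
    private val cache = ConcurrentHashMap<K, Deferred<V>>()

    suspend fun get(key: K, compute: suspend () -> V): V =
        cache.computeIfAbsent(key) { scope.async { compute() } }.await()
}

fun main() = runBlocking {
    val cache = DeferredCache<String, Int>(this)
    var calls = 0
    repeat(3) { println(cache.get("answer") { calls++; 42 }) } // 42, three times
    println("computed $calls time(s)")                          // computed 1 time(s)
}
```

One caveat of this sketch: a failed <code>Deferred</code> stays cached unless you evict it on failure.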
<h2 id="heading-transactional">Transactional</h2>
<p>Currently, unless you happen to use the R2DBC driver, no <code>ReactiveTransactionManager</code> is auto-configured in your application. This means that reactive methods, like suspend functions, cannot be used with the <code>@Transactional</code> annotation.</p>
<p>As far as I know, it is problematic to stretch a transaction across several threads. And in case of suspend functions, you might switch the thread several times.</p>
<h3 id="heading-workaround">Workaround</h3>
<p>If you want to stick to suspend functions, you will have to use the <code>TransactionTemplate</code>. You should be able to inject an <code>org.springframework.transaction.PlatformTransactionManager</code> and then create a <code>TransactionTemplate</code>:</p>
<pre><code class="lang-kotlin"><span class="hljs-keyword">val</span> transactionTemplate = TransactionTemplate(platformTransactionManager)
</code></pre>
<p>You can then call the <code>executeWithoutResult</code> or <code>execute</code> methods. But be careful: I did not test this. The way I see it, everything you do within one of those two execute functions should be non-reactive, for the reason mentioned above, so you should avoid calling other suspend functions from there (which would require <code>runBlocking</code> anyway).</p>
<h2 id="heading-circuitbreaker">CircuitBreaker</h2>
<p>As you can see from <a target="_blank" href="https://github.com/resilience4j/resilience4j/issues/1539">this issue</a> (as of 11 October 2022), there is no support for the <code>@CircuitBreaker</code> annotation on suspend functions. Nevertheless, Resilience4J itself supports Kotlin Coroutines.</p>
<h3 id="heading-workaround">Workaround</h3>
<p>As you can read in the <a target="_blank" href="https://resilience4j.readme.io/docs/getting-started-4#usage---suspending-functions">documentation</a> you may execute or decorate the suspend functions. Something like this should work:</p>
<pre><code class="lang-kotlin">    <span class="hljs-keyword">private</span> <span class="hljs-keyword">suspend</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-type">&lt;T&gt;</span> <span class="hljs-title">makeCall</span><span class="hljs-params">(yourLambda: <span class="hljs-type">suspend</span> () -&gt; <span class="hljs-type">T</span>)</span></span>: T =
            circuitBreaker.executeSuspendFunction {
                timeLimiter.executeSuspendFunction(yourLambda)
            }
</code></pre>
<h2 id="heading-feignclient">FeignClient</h2>
<p>If you are a fan of the declarative FeignClient, I have to disappoint you, too: there is currently no support for reactive methods.</p>
<h3 id="heading-workaround">Workaround</h3>
<p>There is a community project called "Feign Reactive" which you can find <a target="_blank" href="https://github.com/PlaytikaOSS/feign-reactive">here</a>. It allows you to use <code>Mono</code> or <code>Flux</code> as return types. You could then transform those into Coroutines by using the extension functions from <code>kotlinx-coroutines-reactor</code>, like <code>awaitSingle()</code>.</p>
<h2 id="heading-bean">Bean</h2>
<p>Not even the <code>@Bean</code> annotation works with suspend functions. Somehow, the inserted <code>Continuation</code> also messes with Spring's <code>ConstructorResolver</code>.</p>
<p>Okay, this is quite an artificial example since there should be no suspend functions involved when creating a bean. Still, it shows that there are some problems left to be solved.</p>
<h2 id="heading-final-remarks">Final remarks</h2>
<p>As always with new technologies, the surrounding ecosystem must catch up. Though Kotlin and its Coroutines aren't that new, a framework as big as Spring cannot change quickly.</p>
<p>There was a <a target="_blank" href="https://github.com/konrad-kaminski/spring-kotlin-coroutine">project</a> that tried to overcome these limitations, but it hasn't received any updates in a while. So, I suppose it is EOL.</p>
<p>Personally, I hope that with Spring 6 and Spring Boot 3 there will be better support out-of-the-box for all the things I have shown in this article. We will know more at the end of the year.</p>
]]></content:encoded></item><item><title><![CDATA[Book Review: Effective Kotlin by Marcin Moskala]]></title><description><![CDATA[Introduction
Today, I want to present to you the book "Effective Kotlin" by Marcin Moskala. A friend and former colleague of mine recommended it to me (thank you Patrick). I must admit that I was skeptical after looking at the book before ordering it...]]></description><link>https://blog.code-n-roll.dev/book-review-effective-kotlin-by-marcin-moskala</link><guid isPermaLink="true">https://blog.code-n-roll.dev/book-review-effective-kotlin-by-marcin-moskala</guid><category><![CDATA[Kotlin]]></category><category><![CDATA[#BookReview]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Mon, 26 Sep 2022 17:15:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1663781928938/kuXTxvDyN.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>Today, I want to present to you the book "Effective Kotlin" by Marcin Moskala. A friend and former colleague of mine recommended it to me (thank you Patrick). I must admit that I was skeptical after looking at the book before ordering it. Some time ago I stopped reading books that are solely on a "code" level. Programming languages, their compilers or even what is considered best practice (looking at you Scala 2) change so rapidly these days that such books are obsolete a year or two after publication. There are some evergreens of course, e.g., Java Concurrency in Practice, but those are rare in my opinion. "Effective Kotlin" was published in November 2019. According to my definition this means it is already old (for comparison November 2019 <a target="_blank" href="https://github.com/JetBrains/kotlin/releases/tag/v1.3.60">Kotlin 1.3.60</a> was released, today we use <a target="_blank" href="https://github.com/JetBrains/kotlin/releases/tag/v1.7.10">Kotlin 1.7.10</a>).</p>
<p>Still, since it was a recommendation, I gave it a try. </p>
<h1 id="heading-the-strucuture">The structure</h1>
<p>The book contains different levels of recommendations. Some cover basics that are not related to Kotlin (e.g., "Design for Readability") and others are very specific details (e.g., "Avoid member extensions"). Additionally, chapters covering general aspects are labeled for better orientation. The book is separated into three parts, each centered around a certain aspect, and within each part the chapters increase in complexity. This structure helps when one has to look up specific advice or search for a code example.</p>
<h1 id="heading-the-content">The content</h1>
<p>As already mentioned, the information varies from OOP best practices to very Kotlin specific aspects. Every chapter contains code examples illustrating what is being presented. The examples are easy to understand since they are written in a readable way. I read the printed edition; therefore, the code is well formatted. Some online reviews mention that the formatting is not good when using an e-book reader.</p>
<p>The code is mostly unrelated to a specific platform. From time to time, specific aspects are pointed out that become relevant when one is not using Kotlin on the JVM. Additionally, some examples show Android code. Nevertheless, one does not need to know the Android platform.</p>
<h1 id="heading-how-did-the-book-help-me-during-my-daily-work">How did the book help me during my daily work?</h1>
<p>What I will try sometime soon is to introduce more inline value classes (the ones with the <code>JvmInline</code> annotation and the <code>value</code> keyword).
Additionally, what I pay more attention to after having read the book is the usage of sequences when chaining many <code>map</code>, <code>flatMap</code>, <code>filter</code>, etc. transformations.</p>
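<p>As a small illustration of both points (my own toy example, not taken from the book):</p>

```kotlin
// Inline value class: type safety at compile time,
// but on the JVM it is compiled down to a plain String
@JvmInline
value class IsbnNumber(val raw: String)

fun main() {
    println(IsbnNumber("978-8395452833").raw)

    // Sequence: the chained map/filter run lazily, element by element,
    // instead of materializing an intermediate list for every step
    val first = (1..1_000_000).asSequence()
        .map { it * it }
        .filter { it % 7 == 0 }
        .first()
    println(first) // 49
}
```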
<p>But the real value is not that I learned this certain pattern or that method; it is that I now have a reference book to grab and quickly check whenever I get the feeling that there should be a better way to write some piece of code.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>All in all, I have to admit that the book is still relevant these days and that I learned quite a lot. I am not sure if it will become one of those evergreen books, but I would recommend you give it a try.</p>
<p>If Marcin Moskala ever reads this article, here is my recommendation for the next edition: add an alphabetical index at the end of the book. It would make it much easier to find certain things by looking up a keyword. In fact, its absence is the book's biggest weakness.</p>
]]></content:encoded></item><item><title><![CDATA[Getting started with Spring and Coroutines - Part 2]]></title><description><![CDATA[This is the second part of my short series about Spring and Kotlin coroutines. In the first part we learned that we can stay reactive from the controller to the repository. The only requirement was that our database driver is also reactive. When usin...]]></description><link>https://blog.code-n-roll.dev/getting-started-with-spring-and-coroutines-part-2</link><guid isPermaLink="true">https://blog.code-n-roll.dev/getting-started-with-spring-and-coroutines-part-2</guid><category><![CDATA[Kotlin]]></category><category><![CDATA[concurrency]]></category><category><![CDATA[software development]]></category><category><![CDATA[Spring]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Sun, 07 Aug 2022 08:43:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1659632983172/FbpxD4pwV.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is the second part of my short series about Spring and Kotlin coroutines. In the <a target="_blank" href="https://blog.code-n-roll.dev/getting-started-with-spring-and-coroutines-part-1">first part</a> we learned that we can stay reactive from the controller to the repository. The only requirement was that our database driver is also reactive. When using NoSQL databases, we often have the choice of using a reactive driver. Nevertheless, many applications use Hibernate/JDBC, which does not provide a reactive driver yet. I know that R2DBC exists, but it is still not stable and not an official driver. Luckily, Kotlin provides us with ways to bridge the gap.</p>
<h2 id="heading-using-a-non-reactive-database">Using a non-reactive database</h2>
<p>Basically, I use the same example as in the previous article. There is a controller, a repository, and an entity. To better highlight the difference, I also introduced a service class.</p>
<h3 id="heading-the-repository">The repository</h3>
<p>The repository is as expected:</p>
<pre><code class="lang-kotlin"><span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">EntityRepository</span>: <span class="hljs-type">JpaRepository</span>&lt;<span class="hljs-type">TestEntity, UUID</span>&gt;</span>
</code></pre>
<p>A simple <code>JpaRepository</code>. No additional methods required.</p>
<h3 id="heading-the-controller">The controller</h3>
<p>Basically, the controller stayed the same:</p>
<pre><code class="lang-kotlin"><span class="hljs-meta">@RestController</span>
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">TestController</span></span>(<span class="hljs-keyword">private</span> <span class="hljs-keyword">val</span> service: TestService) {

    <span class="hljs-meta">@GetMapping(path = [<span class="hljs-meta-string">"/entity/{id}"</span>])</span>
    <span class="hljs-keyword">suspend</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">getEntity</span><span class="hljs-params">(<span class="hljs-meta">@PathVariable</span> id: <span class="hljs-type">UUID</span>)</span></span> = service.findById(id)

}
</code></pre>
<p>The only difference is that we use the service and do not call the repository directly.</p>
<h3 id="heading-the-service">The service</h3>
<p>Let's start with the way we shouldn't do it:</p>
<pre><code class="lang-kotlin"><span class="hljs-meta">@Service</span>
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">TestService</span></span>(<span class="hljs-keyword">private</span> <span class="hljs-keyword">val</span> repository: EntityRepository) {
    <span class="hljs-keyword">suspend</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">findById</span><span class="hljs-params">(id: <span class="hljs-type">UUID</span>)</span></span>: TestEntity? = repository.findById(id).orElse(<span class="hljs-literal">null</span>)
}
</code></pre>
<p>If we tried this, our code would compile and our tests would pass. So, what's the problem, you ask? IntelliJ shows a warning:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659501122067/PK3a2m_Mz.png" alt="coroutineWarning.png" /></p>
<p>"Thread starvation", interesting, but what does this mean? We can find a good definition in the <a target="_blank" href="https://docs.oracle.com/javase/tutorial/essential/concurrency/starvelive.html">Java SE documentation</a>:</p>
<blockquote>
<p>Starvation describes a situation where a thread is unable to gain regular access to shared resources and is unable to make progress</p>
</blockquote>
<p>Our coroutine calls a blocking part of the code. Because it has to block and wait for the (database) response, the thread cannot perform other work. If the database never answers, the thread stays occupied forever, and the thread pool our thread originated from has lost one thread.
What's even worse is that we lose our coroutine advantage. Coroutines should be non-blocking: when they wait for something, they are supposed to return the thread and wait in the background (simply put).</p>
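<p>To make the starvation point tangible, here is a small standalone sketch (assuming only <code>kotlinx-coroutines-core</code> on the classpath; nothing Spring-specific): on a two-thread pool, four blocking sleeps must queue up in two "rounds", while four suspending delays all wait concurrently.</p>

```kotlin
import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

fun main() {
    val pool = newFixedThreadPoolContext(2, "demo") // only two threads

    // Thread.sleep blocks its thread: 4 tasks / 2 threads -> roughly 200 ms
    val blocking = measureTimeMillis {
        runBlocking { repeat(4) { launch(pool) { Thread.sleep(100) } } }
    }
    // delay suspends and frees the thread: all four wait at once -> roughly 100 ms
    val suspending = measureTimeMillis {
        runBlocking { repeat(4) { launch(pool) { delay(100) } } }
    }
    println("blocking took ${blocking}ms, suspending took ${suspending}ms")
    pool.close()
}
```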
<p>Let's take a look at the better solution.</p>
<p>IntelliJ already told us what we have to do. After applying the quick fix our service looks like this:</p>
<pre><code class="lang-kotlin"><span class="hljs-meta">@Service</span>
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">TestService</span></span>(<span class="hljs-keyword">private</span> <span class="hljs-keyword">val</span> repository: EntityRepository) {
    <span class="hljs-keyword">suspend</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">findById</span><span class="hljs-params">(id: <span class="hljs-type">UUID</span>)</span></span>: TestEntity? {
        <span class="hljs-keyword">return</span> withContext(Dispatchers.IO) {
            repository.findById(id)
        }.orElse(<span class="hljs-literal">null</span>)
    }
}
</code></pre>
<p>But what does this do? We can add some logging (I like to use <code>io.github.microutils:kotlin-logging</code>) to get some insights. The important part here is the thread name. Therefore, I had to adjust the logback layout a little.</p>
<p> When we use the non-optimal approach, we'll see this output in the service's <code>findById()</code> method:</p>
<pre><code><span class="hljs-attribute">2022</span>-<span class="hljs-number">07</span>-<span class="hljs-number">14</span> | <span class="hljs-number">06</span>:<span class="hljs-number">26</span>:<span class="hljs-number">09</span>.<span class="hljs-number">967</span> | reactor-http-nio-<span class="hljs-number">4</span> @coroutine#<span class="hljs-number">1</span> |  INFO | d.c.c.TestService | Hello from findById
</code></pre><p>The important part is the third column. Our code is running in a reactor thread on coroutine number one.
When using the suggested solution we get the following output:</p>
<pre><code><span class="hljs-attribute">2022</span>-<span class="hljs-number">07</span>-<span class="hljs-number">14</span> | <span class="hljs-number">06</span>:<span class="hljs-number">25</span>:<span class="hljs-number">05</span>.<span class="hljs-number">536</span> | reactor-http-nio-<span class="hljs-number">4</span> @coroutine#<span class="hljs-number">1</span> |  INFO | d.c.c.TestService | Hello from findById outside context
<span class="hljs-attribute">2022</span>-<span class="hljs-number">07</span>-<span class="hljs-number">14</span> | <span class="hljs-number">06</span>:<span class="hljs-number">25</span>:<span class="hljs-number">05</span>.<span class="hljs-number">544</span> | DefaultDispatcher-worker-<span class="hljs-number">1</span> @coroutine#<span class="hljs-number">1</span> |  INFO | d.c.c.TestService | Hello from findById inside context
</code></pre><p>As we can see, by switching the context we switched the thread pool while staying in the same coroutine. By calling <code>withContext</code> we "returned" the Reactor thread, which is now free to handle other incoming HTTP requests. Our blocking code now uses a thread from the IO dispatcher, whose purpose is to handle blocking calls. You may wonder why the <code>DefaultDispatcher</code> appears even though we used <code>Dispatchers.IO</code>. We can find the answer in the <code>Dispatchers.IO</code> <a target="_blank" href="https://kotlinlang.org/api/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines/-dispatchers/-i-o.html">documentation</a>:</p>
<blockquote>
<p>This dispatcher and its views share threads with the Default dispatcher, [...]</p>
</blockquote>
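<p>You can verify this thread sharing yourself with a minimal snippet (again assuming only <code>kotlinx-coroutines-core</code>; no Spring needed):</p>

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    // The IO dispatcher hands us a thread named after the Default
    // dispatcher, because the two share the same underlying pool.
    val name = withContext(Dispatchers.IO) { Thread.currentThread().name }
    println(name) // e.g. DefaultDispatcher-worker-1
}
```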
<p>Now that we use the new context, our tests still pass. Nevertheless, we now have a solution that is optimal for coroutines.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>Our main conclusion is that we should do what IntelliJ demands :-D JetBrains, being the company behind IntelliJ <strong>and</strong> Kotlin, obviously knows what works best and what doesn't. Besides that, we can see that bridging from non-blocking to blocking code is not that difficult with coroutines in Spring. Finally, stay tuned for my next article featuring coroutines.</p>
<p>As usual, you can find the complete example <a target="_blank" href="https://github.com/rbraeunlich/coroutines-2">on GitHub</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Getting started with Spring and Coroutines - Part 1]]></title><description><![CDATA[What I have seen several times among my colleagues and myself is that Coroutines are often regarded as an interesting topic, but developers are quite reluctant to use them. I suppose it is a mixture of the learning curve and bad experiences with Java...]]></description><link>https://blog.code-n-roll.dev/getting-started-with-spring-and-coroutines-part-1</link><guid isPermaLink="true">https://blog.code-n-roll.dev/getting-started-with-spring-and-coroutines-part-1</guid><category><![CDATA[Kotlin]]></category><category><![CDATA[concurrency]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Fri, 08 Jul 2022 12:50:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1656934538196/91Ldg1NWO.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>What I have seen several times among my colleagues and myself is that Coroutines are often regarded as an interesting topic, but developers are quite reluctant to use them. I suppose it is a mixture of the learning curve and bad experiences with Java threads, since most of us were Java developers. Or as Matt Wynne once wrote <a target="_blank" href="https://twitter.com/mattwynne/status/468404450710024192">on Twitter</a>:</p>
<blockquote>
<p>I had a problem, so I decided to use threads. tNwoowp rIo bhlaevmes.</p>
</blockquote>
<p>Still, it is surprisingly easy to get started with Coroutines in Spring: when using WebFlux, we get Coroutine support out of the box. One disclaimer about Coroutines has to be mentioned here: Coroutines won't automagically make our code thread-safe. We still have to consider possible concurrency issues. Anyway, code written with Coroutines looks more familiar than, e.g., WebFlux/Reactor code, because it reads like sequential code and is not cluttered with all the <code>map()</code>, <code>flatMap()</code>, <code>switchIfEmpty()</code>... calls.</p>
<p>To encourage the usage of Coroutines I decided to write some small blog posts, highlighting my experiences. Additionally, in my projects, I could see a substantial performance boost after switching to Coroutines. In this part we will cover the easiest case, staying reactive from the controller to the repository.</p>
<h2 id="heading-reactive-from-the-controller-to-the-repository">Reactive from the controller to the repository</h2>
<p>The easiest way to get started with Coroutines in Spring is when you have <code>spring-boot-starter-webflux</code> in your classpath and your database driver is also reactive. In that case most of the work is just adding the <code>suspend</code> keyword. But let's take a look at an example.</p>
<h3 id="heading-the-restcontroller">The RestController</h3>
<p>The controller is nothing special, it just needs the <code>suspend</code> keyword:</p>
<pre><code class="lang-kotlin"><span class="hljs-meta">@RestController</span>
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">TestController</span></span>(<span class="hljs-keyword">private</span> <span class="hljs-keyword">val</span> repository: EntityRepository) {

    <span class="hljs-meta">@GetMapping(path = [<span class="hljs-meta-string">"/entity/{id}"</span>])</span>
    <span class="hljs-keyword">suspend</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">getEntity</span><span class="hljs-params">(<span class="hljs-meta">@PathVariable</span> id: <span class="hljs-type">String</span>)</span></span> = repository.findById(ObjectId(id))

}
</code></pre>
<p>When you set a breakpoint in the method you can see that Spring, using Reactor, started a Coroutine for us. We do not have to handle any contexts. </p>
<p>To keep the example short, I did not create a <code>Service</code> class. So next, let's take a look at the repository.</p>
<h3 id="heading-the-repository">The repository</h3>
<p>The repository looks slightly different than usual:</p>
<pre><code class="lang-kotlin"><span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">EntityRepository</span>: <span class="hljs-type">CoroutineCrudRepository</span>&lt;<span class="hljs-type">TestEntity, ObjectId</span>&gt;</span>
</code></pre>
<p>Spring introduced the <code>CoroutineCrudRepository</code> some time ago, in which all methods are <code>suspend</code> functions. In my example I use the reactive MongoDB driver. Therefore, everything is reactive by default and I do not have to add any additional configuration. Furthermore, my code looks like the usual sequential code one is used to reading.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>As you can see from the code, we hardly notice that we are using Coroutines (still, keep in mind that your code must be thread-safe!). Of course, the example is very simple and nowhere close to the complexity of a real-world project. Still, it is a starting point. In the following article we will see how we can bridge the gap to a non-reactive database driver.</p>
<p>You can find the complete example <a target="_blank" href="https://github.com/rbraeunlich/coroutines-1">on GitHub</a>. </p>
]]></content:encoded></item><item><title><![CDATA[Better integration tests with WireMock]]></title><description><![CDATA[No matter if you follow the classical test pyramid or one of the newer approaches like the Testing Honeycomb you should start writing integration tests at some point during development.There are different types of integration tests you can write. Sta...]]></description><link>https://blog.code-n-roll.dev/better-integration-tests-with-wiremock</link><guid isPermaLink="true">https://blog.code-n-roll.dev/better-integration-tests-with-wiremock</guid><category><![CDATA[Java]]></category><category><![CDATA[clean code]]></category><category><![CDATA[Spring]]></category><category><![CDATA[Testing]]></category><category><![CDATA[Springboot]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Tue, 19 Nov 2019 18:08:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1659430279798/GVLIZFTLh.jfif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>No matter if you follow the classical test pyramid or one of the newer approaches like the <a target="_blank" href="https://labs.spotify.com/2018/01/11/testing-of-microservices/">Testing Honeycomb</a> you should start writing integration tests at some point during development.<br />There are different types of integration tests you can write. Starting with persistence tests, you can check the interaction between your components or you can simulate calling external services. This article will be about the latter case.<br />Let us start with a motivating example before talking about WireMock.  </p>
<h2 id="heading-the-chucknorrisfact-service">The ChuckNorrisFact service</h2>
<p>The complete example can be found on <a target="_blank" href="https://github.com/rbraeunlich/wiremock-integrationtest">GitHub</a>.<br />You might have seen me using the <a target="_blank" href="http://www.icndb.com/api/">Chuck Norris fact API</a> in a <a target="_blank" href="https://wrongtracks.blogspot.com/2018/05/improve-your-test-structure-with.html">previous blog post</a>. The API will serve us as an example for another service that our implementation depends on.<br />We have a simple <code>ChuckNorrisFactController</code> as the API for manual testing. Next to the “business” classes there is the <code>ChuckNorrisService</code> that does the call to the external API. It uses Spring’s <code>RestTemplate</code>. Nothing special.<br />What I have seen many times are tests that mock the RestTemplate and return some pre-canned answer. The implementation could look like this:  </p>
<pre><code class="lang-java"><span class="hljs-meta">@Service</span>
<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">ChuckNorrisService</span></span>{
...
  <span class="hljs-function"><span class="hljs-keyword">public</span> ChuckNorrisFact <span class="hljs-title">retrieveFact</span><span class="hljs-params">()</span> </span>{
    ResponseEntity&lt;ChuckNorrisFactResponse&gt; response = restTemplate.getForEntity(url, ChuckNorrisFactResponse.class);
    <span class="hljs-keyword">return</span> Optional.ofNullable(response.getBody()).map(ChuckNorrisFactResponse::getFact).orElse(BACKUP_FACT);
  }
 ...
 }
</code></pre>
<p>Next to the usual unit tests checking for the success cases there would be at least one test covering the error case, i.e. a 4xx or 5xx status code:  </p>
<pre><code class="lang-java">  <span class="hljs-meta">@Test</span>
  <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">shouldReturnBackupFactInCaseOfError</span><span class="hljs-params">()</span> </span>{
    String url = <span class="hljs-string">"http://localhost:8080"</span>;
    RestTemplate mockTemplate = mock(RestTemplate.class);
    ResponseEntity&lt;ChuckNorrisFactResponse&gt; responseEntity = <span class="hljs-keyword">new</span> ResponseEntity&lt;&gt;(HttpStatus.SERVICE_UNAVAILABLE);
    when(mockTemplate.getForEntity(url, ChuckNorrisFactResponse.class)).thenReturn(responseEntity);
    <span class="hljs-keyword">var</span> service = <span class="hljs-keyword">new</span> ChuckNorrisService(mockTemplate, url);

    ChuckNorrisFact retrieved = service.retrieveFact();

    assertThat(retrieved).isEqualTo(ChuckNorrisService.BACKUP_FACT);
  }
</code></pre>
<p>Doesn’t look bad, right? The response entity returns a 503 error code and our service will not crash. All tests are green and we can deploy our application.<br />Unfortunately, Spring’s RestTemplate does not work like this. The method signature of <code>getForEntity</code> gives us a very small hint. It states <code>throws RestClientException</code>. And this is where the mocked RestTemplate differs from the actual implementation. We will never receive a <code>ResponseEntity</code> with a 4xx or 5xx status code. The RestTemplate will throw a subclass of <code>RestClientException</code>. Looking at the class hierarchy we can get a good impression of what could be thrown:  </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659344208597/_KC4CyAmS.png" alt="classHierarchy.png" /></p>
<p>Therefore, let’s see how we can make this test better.  </p>
<h2 id="heading-wiremock-to-the-rescue">WireMock to the rescue</h2>
<p>WireMock simulates web services by starting a mock server and returning answers you configured it to return. It is easy to integrate into your tests and mocking requests is also simple thanks to a nice DSL.<br />For JUnit 4 there is a <code>WireMockRule</code> that helps with starting and stopping the server. For JUnit 5 you will have to do it yourself. When you check the example project you can find the <code>ChuckNorrisServiceIntegrationTest</code>. It is a Spring Boot test based on JUnit 4. Let’s take a look at it.<br />The most important part is the <code>ClassRule</code>:  </p>
<pre><code class="lang-java">      <span class="hljs-meta">@ClassRule</span>  <span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> WireMockRule wireMockRule = <span class="hljs-keyword">new</span> WireMockRule();
</code></pre>
<p>As mentioned before, this will start and stop the WireMock server. You could also use the rule as a normal <code>Rule</code> to start and stop the server for each test. For our test this isn’t necessary.<br />Next, you can see several <code>configureWireMockFor...</code> methods. These contain the instructions that tell WireMock when to return which answer. Splitting the WireMock configuration into several methods and calling them from the tests is my approach to using WireMock. Of course you could set up all possible requests in an <code>@Before</code> method. For the success case we do:  </p>
<pre><code class="lang-java">  <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">configureWireMockForOkResponse</span><span class="hljs-params">(ChuckNorrisFact fact)</span> <span class="hljs-keyword">throws</span> JsonProcessingException </span>{
    ChuckNorrisFactResponse chuckNorrisFactResponse = <span class="hljs-keyword">new</span> ChuckNorrisFactResponse(<span class="hljs-string">"success"</span>, fact);
    stubFor(get(urlEqualTo(<span class="hljs-string">"/jokes/random"</span>))
        .willReturn(okJson(OBJECT_MAPPER.writeValueAsString(chuckNorrisFactResponse))));
  }
</code></pre>
<p>All methods are imported statically from <code>com.github.tomakehurst.wiremock.client.WireMock</code>. As you can see, we stub an HTTP GET to a path <code>/jokes/random</code> and return a JSON object. The <code>okJson()</code> method is just shorthand for a 200 response with JSON content. For the error case the code is even simpler:  </p>
<pre><code class="lang-java">  <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">void</span> <span class="hljs-title">configureWireMockForErrorResponse</span><span class="hljs-params">()</span> </span>{
    stubFor(get(urlEqualTo(<span class="hljs-string">"/jokes/random"</span>))
        .willReturn(serverError()));
  }
</code></pre>
<p>As you can see, the DSL makes it easy to read the instructions.<br />Having WireMock in place we can see that our previous implementation does not work since the RestTemplate throws an exception. Therefore, we have to adjust our code:  </p>
<pre><code class="lang-java">  <span class="hljs-function"><span class="hljs-keyword">public</span> ChuckNorrisFact <span class="hljs-title">retrieveFact</span><span class="hljs-params">()</span> </span>{
    <span class="hljs-keyword">try</span> {
      ResponseEntity&lt;ChuckNorrisFactResponse&gt; response = restTemplate.getForEntity(url, ChuckNorrisFactResponse.class);
      <span class="hljs-keyword">return</span> Optional.ofNullable(response.getBody()).map(ChuckNorrisFactResponse::getFact).orElse(BACKUP_FACT);
    } <span class="hljs-keyword">catch</span> (HttpStatusCodeException e){
      <span class="hljs-keyword">return</span> BACKUP_FACT;
    }
  }
</code></pre>
<p>This already covers WireMock’s basic use-cases. Configure an answer for a request, execute the test, check the results. It’s as simple as that.<br />Still, there is one problem you will usually encounter when you run your tests in a cloud environment. Let’s see what we can do.  </p>
<h3 id="heading-wiremock-on-a-dynamic-port">WireMock on a dynamic port</h3>
<p>You might have noticed that the integration test in the project contains an <code>ApplicationContextInitializer</code> class and that its <code>@TestPropertySource</code> annotation overwrites the URL of the actual API. That is because I wanted to start WireMock on a random port. Of course you can configure a fixed port for WireMock and use this one as a hard-coded value in your tests. But if your tests are running on some cloud provider’s infrastructure you cannot be sure that the port is free. Therefore, I think a random port is better.<br />Still, when using properties in a Spring application we have to pass the random port somehow to our service. Or, as you can see in the example, overwrite the URL. That is why we use the <code>ApplicationContextInitializer</code>. We add the dynamically assigned port to the application context and then we can refer to it by using the property <code>${wiremock.port}</code>. The only disadvantage here is that we now have to use a ClassRule. Otherwise, we couldn’t access the port before the Spring application is initialized.<br />Having solved this problem, let’s take a look at one common problem when it comes to HTTP calls.  </p>
<h3 id="heading-timeouts">Timeouts</h3>
<p>WireMock offers many more possibilities for responses than just simple answers to GET requests. Another test case that is often forgotten is testing timeouts. Developers tend to forget to set timeouts on the <code>RestTemplate</code> or even on <code>URLConnections</code>. Without timeouts, both will wait indefinitely for a response. In the best case you will not notice, in the worst case all your threads wait for a response that will never arrive.<br />Therefore, we should add a test that simulates a timeout. Of course, we can also create a delay with e.g. a Mockito mock, but in that case we would again be guessing how the RestTemplate behaves. Simulating a delay with WireMock is pretty easy:  </p>
<pre><code class="lang-java">  <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">void</span> <span class="hljs-title">configureWireMockForSlowResponse</span><span class="hljs-params">()</span> <span class="hljs-keyword">throws</span> JsonProcessingException </span>{
    ChuckNorrisFactResponse chuckNorrisFactResponse = <span class="hljs-keyword">new</span> ChuckNorrisFactResponse(<span class="hljs-string">"success"</span>, <span class="hljs-keyword">new</span> ChuckNorrisFact(<span class="hljs-number">1L</span>, <span class="hljs-string">""</span>));
    stubFor(get(urlEqualTo(<span class="hljs-string">"/jokes/random"</span>))
        .willReturn(
            okJson(OBJECT_MAPPER.writeValueAsString(chuckNorrisFactResponse))
                .withFixedDelay((<span class="hljs-keyword">int</span>) Duration.ofSeconds(<span class="hljs-number">10L</span>).toMillis())));
  }
</code></pre>
<p><code>withFixedDelay()</code> expects an int value representing milliseconds. I prefer using a Duration, or at least a constant that indicates that the parameter represents milliseconds, so that I don’t have to read the JavaDoc every time.<br />After setting a timeout on our <code>RestTemplate</code> and adding the test for the slow response we can see that the RestTemplate throws a <code>ResourceAccessException</code>. So we can either adjust the catch block to catch this exception and the <code>HttpStatusCodeException</code> or just catch the superclass of both:  </p>
<pre><code class="lang-java">  <span class="hljs-keyword">public</span> ChuckNorrisFact retrieveFact() {
    <span class="hljs-keyword">try</span> {
      ResponseEntity<span class="hljs-operator">&lt;</span>ChuckNorrisFactResponse<span class="hljs-operator">&gt;</span> response <span class="hljs-operator">=</span> restTemplate.getForEntity(url, ChuckNorrisFactResponse.class);
      <span class="hljs-keyword">return</span> Optional.ofNullable(response.getBody()).map(ChuckNorrisFactResponse::getFact).orElse(BACKUP_FACT);
    } <span class="hljs-keyword">catch</span> (RestClientException e){
      <span class="hljs-keyword">return</span> BACKUP_FACT;
    }
  }
</code></pre><p>Now we have nicely covered the most common cases when doing HTTP requests and we can be sure that we are testing close to real world conditions.  </p>
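<p>The same timeout discipline applies outside Spring, too. As a small illustration (the class and method names here are made up for this sketch), this is how an explicit connect timeout looks with the JDK’s built-in <code>java.net.http.HttpClient</code>:</p>

```java
import java.net.http.HttpClient;
import java.time.Duration;

public class TimeoutConfig {

    // Build a client with an explicit connect timeout. Without one,
    // the client may wait arbitrarily long to establish a connection.
    static HttpClient buildClient() {
        return HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))
                .build();
    }

    public static void main(String[] args) {
        // connectTimeout() returns an Optional that is empty unless set explicitly
        System.out.println(TimeoutConfig.buildClient().connectTimeout().isPresent());
    }
}
```

<p>Per-request read timeouts can be set analogously via <code>HttpRequest.newBuilder().timeout(...)</code>.</p>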
<h2 id="heading-why-not-hoverfly">Why not Hoverfly?</h2>
<p>Another choice for HTTP integration tests is <a target="_blank" href="https://hoverfly.io/">Hoverfly</a>. It works similarly to WireMock but I have come to prefer the latter. The reason is that WireMock is also quite useful when running end-to-end tests that include a browser. Hoverfly (at least the Java library) is limited by using JVM proxies. This might make it faster than WireMock but when e.g. some JavaScript code comes into play it does not work at all. The fact that WireMock starts a webserver is very useful when your browser code also calls some other services directly. You can then mock those with WireMock, too, and write e.g. your Selenium tests.  </p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>I hope this article could show you two things:  </p>
<ol>
<li>the importance of integration tests</li>
<li>that WireMock is pretty nice</li>
</ol>
<p>Of course, both topics could fill many more articles. Still, I wanted to give you a feeling of how to use WireMock and what it is capable of. Feel free to check their documentation and try many more things. As an example, testing authentication with WireMock is also possible.</p>
]]></content:encoded></item><item><title><![CDATA[Book review: Programming Beyond Practices by Gregory T. Brown]]></title><description><![CDATA[Picture by chimp CC BY 3.0 license
I have recently finished the book "Programming Beyond Practices" by Gregory T. Brown and want to give you a short book review.  
Despite the title containing the word "programming" the book does not contain any code...]]></description><link>https://blog.code-n-roll.dev/book-review-programming-beyond-practices-by-gregory-t-brown</link><guid isPermaLink="true">https://blog.code-n-roll.dev/book-review-programming-beyond-practices-by-gregory-t-brown</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[#BookReview]]></category><category><![CDATA[programming]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Mon, 25 Jun 2018 16:43:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1659430798303/rmweGgE-h.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Picture by <a target="_blank" href="https://piq.codeus.net/u/chimp">chimp</a> <a target="_blank" href="https://creativecommons.org/licenses/by/3.0/">CC BY 3.0 license</a></p>
<p>I have recently finished the book "Programming Beyond Practices" by Gregory T. Brown and want to give you a short book review.  </p>
<p>Despite the title containing the word "programming" the book does not contain any code at all, and this totally fits the intention of the book. In eight chapters the author shows us things we have to take care of besides writing code. The subtitle "Be more than just a code monkey" emphasizes this even more.  </p>
<p>When I received the book I was surprised that it is relatively short. At roughly 120 pages it is one of the shortest books I own. Initially, I thought that the short length would be a disadvantage. Nevertheless, after having finished the book, I think it is an advantage. One can easily read it again and again without it turning into two weeks of reading. Just doing a recap of a certain aspect of the book or of all chapters can be done quickly. Still, the chapters contain all of the information necessary and I never felt that the author missed something or kept a chapter artificially short. The only time I wanted to read more was the second-to-last chapter, where I wanted to know how the company described would proceed. But that was just my interest in the well written story.  </p>
<p>This leads me to the style of writing. The author chose a good way to share his knowledge: every chapter describes a short story of an imaginary project, software, lecture, etc., which contains some interwoven dialogs. These stories make the book easy to read and make the learnings tangible. The chapters all start with a short, general introduction and finish with a summary. Additionally, the author added some questions and exercises to the chapters to force the reader to think a little bit more about the topic presented. Although I did not really do the exercises, I think they can be of great use if one reads this book with others.  </p>
<p>Lastly, the author added three riddles to the book. I could not find the solution to any of them, but if anyone managed to solve them, please share the solution with me. Next to that, some additional materials can be found on <a target="_blank" href="http://pbpbook.com/">the author’s website</a>.  </p>
<p>All in all I can recommend the book and suggest you read it, too. The price might seem a little bit high for the number of pages, but regarding the knowledge it contains it is definitely worth it.  </p>
<p>And what did I learn? Being someone who loves his job for the technical aspects I will try to concentrate more on the problem-solving aspect in the future. Especially with regards to the people involved in the project/software.</p>
]]></content:encoded></item><item><title><![CDATA[Gatling-JDBC on Maven Central]]></title><description><![CDATA[Some time ago I wrote a blog post on the codecentric blog about how to extend Gatling. As accompanying code example I created a small library on GitHub: Gatling-JDBC  
Finally, after keeping the library untouched for several months, I performed the n...]]></description><link>https://blog.code-n-roll.dev/gatling-jdbc-on-maven-central</link><guid isPermaLink="true">https://blog.code-n-roll.dev/gatling-jdbc-on-maven-central</guid><category><![CDATA[maven]]></category><category><![CDATA[JDBC]]></category><category><![CDATA[Gatling]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Sat, 19 May 2018 15:11:00 GMT</pubDate><content:encoded><![CDATA[<p>Some time ago I wrote <a target="_blank" href="https://blog.codecentric.de/en/2017/07/gatling-load-testing-part-2-extending-gatling/">a blog post</a> on the codecentric blog about how to extend Gatling. As accompanying code example I created a small library on GitHub: <a target="_blank" href="https://github.com/codecentric/gatling-jdbc">Gatling-JDBC</a>  </p>
<p>Finally, after keeping the library untouched for several months, I performed the necessary steps to publish it on <a target="_blank" href="https://search.maven.org/#artifactdetails%7Cde.codecentric%7Cgatling-jdbc_2.12%7C1.0.0%7Cjar">Maven Central</a>! This means you can now use de.codecentric.gatling-jdbc version 1.0.0 in your Gatling load tests. I upgraded its dependencies so that it is compatible with the latest Gatling version 2.3.1.  </p>
<p>Its usage is described in the blog post or you can take a look at the different simulations <a target="_blank" href="https://github.com/codecentric/gatling-jdbc/tree/master/src/test/scala/de/codecentric/gatling/jdbc">in the test directory</a>. If you encounter any problems or would like to suggest any improvements feel free to open an issue on GitHub.</p>
]]></content:encoded></item><item><title><![CDATA[Improve your test structure with Lambdas and Mockito’s Answer]]></title><description><![CDATA[Refactoring is an important task that should be done from time to time. No matter if you call it technical debt or give it any other name. We sometimes implement features quick and dirty or while implementing we learn better ways to achieve ...]]></description><link>https://blog.code-n-roll.dev/improve-your-test-structure-with-lambdas-and-mockitos-answer</link><guid isPermaLink="true">https://blog.code-n-roll.dev/improve-your-test-structure-with-lambdas-and-mockitos-answer</guid><category><![CDATA[mockito]]></category><category><![CDATA[java8]]></category><category><![CDATA[Lambdas]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Tue, 15 May 2018 11:09:00 GMT</pubDate><content:encoded><![CDATA[<p>Refactoring is an important task that should be done from time to time. No matter if you call it technical debt or give it any other name. We sometimes implement features quick and dirty, or while implementing we learn better ways to achieve our goal.<br />While refactoring is often applied to business logic or the infrastructure related to it, doing it for tests is often neglected. Therefore, I want to show you <a target="_blank" href="https://blog.codecentric.de/en/2018/02/improve-test-structure-lambdas-mockitos-answer/">in this blog post</a> on the codecentric blog how lambdas and Mockito's answer can help your tests.</p>
]]></content:encoded></item><item><title><![CDATA[DRY in the 21st century]]></title><description><![CDATA[Due to some discussion arising regarding the DRY principle, especially regarding Microservices and the consideration to either duplicate code or create a library project, I collected my thoughts about this topic. Please find the article in the codece...]]></description><link>https://blog.code-n-roll.dev/dry-in-the-21st-century</link><guid isPermaLink="true">https://blog.code-n-roll.dev/dry-in-the-21st-century</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[clean code]]></category><category><![CDATA[dry]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Tue, 08 May 2018 11:04:00 GMT</pubDate><content:encoded><![CDATA[<p>Due to some discussion arising regarding the DRY principle, especially regarding Microservices and the consideration to either duplicate code or create a library project, I collected my thoughts about this topic. Please find the article <a target="_blank" href="https://blog.codecentric.de/en/2018/01/dry-in-the-21st-century/">in the codecentric blog</a>. If you want to add your thoughts about this topic, feel free to use the comment section there.</p>
]]></content:encoded></item><item><title><![CDATA[Gatling Load Testing Part 2 - Extending Gatling]]></title><description><![CDATA[Although the blog post was already published some time ago, I would like to point to the second article related to Gatling Load Testing. Again, published here, in the codecentric blog.You can find the related project on GitHub.]]></description><link>https://blog.code-n-roll.dev/gatling-load-testing-part-2-extending-gatling</link><guid isPermaLink="true">https://blog.code-n-roll.dev/gatling-load-testing-part-2-extending-gatling</guid><category><![CDATA[Gatling]]></category><category><![CDATA[Scala]]></category><category><![CDATA[JDBC]]></category><category><![CDATA[Load Testing]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Tue, 01 May 2018 11:01:00 GMT</pubDate><content:encoded><![CDATA[<p>Although the blog post was already published some time ago, I would like to point to the second article related to Gatling Load Testing. Again, published <a target="_blank" href="https://blog.codecentric.de/en/2017/07/gatling-load-testing-part-2-extending-gatling/">here</a>, in the codecentric blog.<br />You can find the related project <a target="_blank" href="https://github.com/rbraeunlich/gatling-jdbc">on GitHub</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Dynamic Validation with Spring Boot Validation]]></title><description><![CDATA[After it has been quiet for a while, I have written a new blog post about dynamic validation with Spring Boot validation. The post has been published on the codecentric blog. You can find it here. Enjoy it!]]></description><link>https://blog.code-n-roll.dev/dynamic-validation-with-spring-boot-validation</link><guid isPermaLink="true">https://blog.code-n-roll.dev/dynamic-validation-with-spring-boot-validation</guid><category><![CDATA[Validation]]></category><category><![CDATA[Springboot]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Wed, 22 Nov 2017 16:44:00 GMT</pubDate><content:encoded><![CDATA[<p>After it has been quiet for a while, I have written a new blog post about dynamic validation with Spring Boot validation. The post has been published on the codecentric blog. You can find it <a target="_blank" href="https://blog.codecentric.de/en/2017/11/dynamic-validation-spring-boot-validation/">here</a>. Enjoy it!</p>
]]></content:encoded></item><item><title><![CDATA[Gatling Load Testing Part 1 – Using Gatling]]></title><description><![CDATA[This time, it’s not the usual blog entry. I want to point out that my blog post about Gatling load testing has been published on my employer’s blog. If you want to read it you can find it here. Feel free to make comments!]]></description><link>https://blog.code-n-roll.dev/gatling-load-testing-part-1-using-gatling</link><guid isPermaLink="true">https://blog.code-n-roll.dev/gatling-load-testing-part-1-using-gatling</guid><category><![CDATA[Gatling]]></category><category><![CDATA[Load Testing]]></category><category><![CDATA[Scala]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Wed, 21 Jun 2017 17:36:00 GMT</pubDate><content:encoded><![CDATA[<p>This time, it’s not the usual blog entry. I want to point out that my blog post about Gatling load testing has been published on my employer’s blog. If you want to read it you can find it <a target="_blank" href="https://blog.codecentric.de/en/2017/06/gatling-load-testing-part-1-using-gatling/">here</a>. Feel free to make comments!</p>
]]></content:encoded></item><item><title><![CDATA[Extension/Service/Plugin mechanisms in Java]]></title><description><![CDATA[Since I started to deep dive into OSGi I was wondering more and more how frameworks that have some way of extension mechanism, e.g. Apache Camel where you can define your own endpoint or the Eclipse IDE with its plugins, handle finding and instantiat...]]></description><link>https://blog.code-n-roll.dev/extensionserviceplugin-mechanisms-in-java</link><guid isPermaLink="true">https://blog.code-n-roll.dev/extensionserviceplugin-mechanisms-in-java</guid><category><![CDATA[Java]]></category><category><![CDATA[OSGi]]></category><category><![CDATA[Spring]]></category><category><![CDATA[Eclipse]]></category><category><![CDATA[CDI]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Wed, 24 Feb 2016 13:12:00 GMT</pubDate><content:encoded><![CDATA[<p><img src="https://1.bp.blogspot.com/-DHM0fGJyBQE/VsvmpvTSgJI/AAAAAAAABj8/ygm8gkptdMY/s1600/noun_341771_cc.png" alt /></p>
<p>Since I started to deep dive into OSGi I was wondering more and more how frameworks that have some kind of extension mechanism, e.g. <a target="_blank" href="http://camel.apache.org/">Apache Camel</a> where you can define your own endpoint or the <a target="_blank" href="https://www.blogger.com/www.eclipse.org">Eclipse IDE</a> with its plugins, handle finding and instantiating extensions. I remember very well a presentation from JAX 2013 by <a target="_blank" href="https://twitter.com/kaitoedter">Kai Tödter</a>, where he showed the combination of Vaadin and OSGi. While the web app was running he could add and remove menu entries, just by starting and stopping the bundles.<br />For a while now I have taken a look at several approaches on how to create an extensible application and you can find resources for every single method. I want to give a medium sized (not short ;)) overview here of the different ways I know to make a Java application extensible. Also, I will add a list of advantages and disadvantages, from my point of view, to each method. For every method I try to give a simple example.<br />To avoid confusion, when I write about the advantages and disadvantages, I will write from the point of view of someone providing this extension mechanism in a framework, not from the API consumer’s point of view.</p>
<h2 id="heading-passing-the-object">Passing the object</h2>
<p>This is the most obvious method. The framework defines a method which takes the SPI interface and you simply pass the object. Camel, next to other methods, makes use of this (example taken from <a target="_blank" href="http://camel.apache.org/how-do-i-add-a-component.html">the Camel FAQ</a>):  </p>
<pre><code class="lang-java">CamelContext context = <span class="hljs-keyword">new</span> DefaultCamelContext();
context.addComponent(<span class="hljs-string">"foo"</span>, <span class="hljs-keyword">new</span> FooComponent(context));
</code></pre>
<p>Internally, Camel doesn't do much magic (code taken from <a target="_blank" href="https://github.com/apache/camel/blob/master/camel-core/src/main/java/org/apache/camel/impl/DefaultCamelContext.java#L369">Camel on GitHub</a>).  </p>
<pre><code class="lang-java"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">addComponent</span><span class="hljs-params">(String componentName, <span class="hljs-keyword">final</span> Component component)</span> </span>{
    ObjectHelper.notNull(component, <span class="hljs-string">"component"</span>);
    <span class="hljs-keyword">synchronized</span> (components) {
        <span class="hljs-keyword">if</span> (components.containsKey(componentName)) {
            <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> IllegalArgumentException(<span class="hljs-string">"Cannot add component as its already previously added: "</span> + componentName);
        }
        component.setCamelContext(<span class="hljs-keyword">this</span>);
        components.put(componentName, component);
        <span class="hljs-keyword">for</span> (LifecycleStrategy strategy : lifecycleStrategies) {
            strategy.onComponentAdd(componentName, component);
        }

        <span class="hljs-comment">// keep reference to properties component up to date</span>
        <span class="hljs-keyword">if</span> (component <span class="hljs-keyword">instanceof</span> PropertiesComponent &amp;&amp; <span class="hljs-string">"properties"</span>.equals(componentName)) {
            propertiesComponent = (PropertiesComponent) component;
        }
    }
}
</code></pre>
<p>Every component has to have a unique name and is somehow bound to a lifecycle. Removal of a component is also possible, but has to be triggered from the user code.  </p>
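<p>Stripped of Camel's lifecycle handling, the core of this mechanism fits in a few lines. The following is a minimal, hypothetical registry (all names are illustrative, not Camel's actual classes):</p>

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of a "pass the object" extension registry,
// loosely mirroring Camel's addComponent shown above.
public class ComponentRegistry<T> {

    private final Map<String, T> components = new ConcurrentHashMap<>();

    public void addComponent(String name, T component) {
        // reject duplicate names, just like DefaultCamelContext does
        if (components.putIfAbsent(name, component) != null) {
            throw new IllegalArgumentException("Cannot add component as its already previously added: " + name);
        }
    }

    public T getComponent(String name) {
        return components.get(name);
    }

    public T removeComponent(String name) {
        // removal works, but the framework must ensure nobody still holds a reference
        return components.remove(name);
    }
}
```

<p>Using a <code>ConcurrentHashMap</code> with <code>putIfAbsent()</code> replaces Camel's explicit <code>synchronized</code> block while keeping the same duplicate check.</p>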
<h3 id="heading-advantages">Advantages</h3>
<ul>
<li>Easy and straightforward</li>
<li>No need for an additional framework</li>
<li>Compiler checks for the correct interface</li>
</ul>
<h3 id="heading-disadvantages">Disadvantages</h3>
<ul>
<li>Access to central class (the plugin/service/component holder) is necessary</li>
<li>Allowing changes during runtime is possible but complicated, since it has to be ensured that the component is removed everywhere</li>
<li>Your framework has to take care of the whole component lifecycle and any additional requirements it enforces</li>
</ul>
<h2 id="heading-interface-and-reflection">Interface and Reflection</h2>
<p>This method is used quite often (basically, it is also how the ServiceLoader works, see the next section) and you can find it with small variations. The differences lie in where and how exactly the interface and implementation names reach the application. Placing them somewhere inside a properties file or passing them to the framework during startup are the most common approaches. The implementation is then instantiated using reflection. Creating a context with an <code>InitialContextFactory</code>, for example, works like this:  </p>
<pre><code class="lang-java">  Properties env = <span class="hljs-keyword">new</span> Properties();
  env.put(Context.INITIAL_CONTEXT_FACTORY,
          <span class="hljs-string">"org.jboss.naming.remote.client.InitialContextFactory"</span>);
  Context ctx = <span class="hljs-keyword">new</span> InitialContext(env);
</code></pre>
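<p>The lookup-and-instantiate step behind this pattern can be sketched with plain reflection. In this sketch the property key and the <code>java.util.ArrayList</code> stand-in are illustrative only:</p>

```java
import java.util.List;
import java.util.Properties;

public class ReflectiveFactory {

    // Read an implementation class name from configuration and instantiate it
    // via reflection -- the same move a JNDI InitialContextFactory lookup makes.
    static Object createFromProperty(Properties props, String key) {
        String className = props.getProperty(key);
        try {
            return Class.forName(className).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            // the wiring is only checked at runtime -- typos surface here
            throw new IllegalStateException("Cannot instantiate " + className, e);
        }
    }

    public static void main(String[] args) {
        Properties env = new Properties();
        env.put("list.implementation", "java.util.ArrayList");
        Object impl = createFromProperty(env, "list.implementation");
        System.out.println(impl instanceof List);
    }
}
```

<p>Note how a misspelled class name only fails when this code runs, which is exactly the "no type safety" disadvantage listed below.</p>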
<h3 id="heading-advantages">Advantages</h3>
<ul>
<li>Easy and straightforward</li>
<li>No need for an additional framework</li>
<li>No need to provide central class (in properties file approach)</li>
</ul>
<h3 id="heading-disadvantages">Disadvantages</h3>
<ul>
<li>No type safety (if text based)</li>
<li>Your framework has to take care of the whole lifecycle and any additional requirements it enforces</li>
<li>Check for correct wiring only during runtime (if text-based, the check happens either at startup or when the code is called; the former is better than the latter)</li>
</ul>
<h2 id="heading-javautilserviceloader">java.util.ServiceLoader</h2>
<p>Frameworks using the <code>java.util.ServiceLoader</code> can also be found quite often. At runtime, the ServiceLoader uses a ClassLoader to check the META-INF/services directory for a text file whose name equals the passed interface (SPI) name, reads the class name inside that file, and then instantiates that class via reflection. All the magic happens in the <code>LazyIterator</code> inside the ServiceLoader class (see <a target="_blank" href="http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8u40-b25/java/util/ServiceLoader.java#ServiceLoader.LazyIterator">OpenJDK</a>). Basically, it's just reading a file and instantiating the object. Camel and HiveMQ, for example, use this method.  </p>
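<p>A minimal sketch of a lookup via the ServiceLoader. The <code>FooService</code> interface is an assumed name for illustration; a provider JAR would register an implementation by putting its class name into a file under <code>META-INF/services</code> named after the fully qualified interface:</p>

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

public class ServiceLoaderExample {

    // Hypothetical SPI interface; assumed for illustration.
    public interface FooService {
        String name();
    }

    // Iterating triggers the LazyIterator: each provider listed in
    // META-INF/services is instantiated via its no-arg constructor.
    public static List<String> discover() {
        List<String> names = new ArrayList<>();
        for (FooService service : ServiceLoader.load(FooService.class)) {
            names.add(service.name());
        }
        return names;
    }

    public static void main(String[] args) {
        // Without a provider on the classpath the iterator is simply empty.
        System.out.println(discover()); // prints []
    }
}
```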
<h3 id="heading-advantages">Advantages</h3>
<ul>
<li>Easy and straightforward</li>
<li>ServiceLoader is part of JDK</li>
<li>No need for an additional framework</li>
</ul>
<h3 id="heading-disadvantages">Disadvantages</h3>
<ul>
<li>No lifecycle</li>
<li>Class has to provide standard constructor</li>
<li>Support for runtime changes must be implemented (as mentioned <a target="_blank" href="https://docs.oracle.com/javase/tutorial/ext/basics/spi.html">here</a>)</li>
<li>Check for correct wiring only during runtime (the filename or the string inside the file could be wrong)</li>
</ul>
<h2 id="heading-eclipse-extension-points">(Eclipse) Extension Points</h2>
<p><img src="https://4.bp.blogspot.com/-L2zZrps1obE/Vs2p0nx2BhI/AAAAAAAABkM/uIaUsLYpgWs/s1600/687474703a2f2f747261632e6564676577616c6c2e6f72672f7261772d6174746163686d656e742f77696b692f547261634465762f436f6d706f6e656e744172636869746563747572652f78746e70742e706e67.png" alt /></p>
<p>Picture under BSD license, see <a target="_blank" href="https://github.com/progrium/go-extpoints/blob/master/LICENSE">here</a></p>
<p>As far as I know, the concept of extension points never became popular outside Eclipse, although it is possible to include them in any application. To achieve loose coupling, the definition of the places where you can add your plugin, as well as the plugins themselves, is extracted into XML files.<br />To define an extension point you need something like this:  </p>
<pre><code class="lang-xml"><span class="hljs-meta">&lt;?xml version="1.0" encoding="UTF-8"?&gt;</span>
<span class="hljs-tag">&lt;<span class="hljs-name">?eclipse</span> <span class="hljs-attr">version</span>=<span class="hljs-string">"3.4"</span>?&gt;</span>
   <span class="hljs-tag">&lt;<span class="hljs-name">extension-point</span>
     <span class="hljs-attr">id</span>=<span class="hljs-string">"de.blogspot.wrongtracks.FooService"</span>
     <span class="hljs-attr">name</span>=<span class="hljs-string">"FooService"</span>
     <span class="hljs-attr">schema</span>=<span class="hljs-string">"schema/de.blogspot.wrongtracks.FooService.exsd"</span>/&gt;</span>
</code></pre>
<p>The extension provider then has to define an appropriate extension for that point:  </p>
<pre><code class="lang-xml"><span class="hljs-meta">&lt;?xml version="1.0" encoding="UTF-8"?&gt;</span>
<span class="hljs-tag">&lt;<span class="hljs-name">?eclipse</span> <span class="hljs-attr">version</span>=<span class="hljs-string">"3.4"</span>?&gt;</span>
<span class="hljs-tag">&lt;<span class="hljs-name">plugin</span>&gt;</span>
   <span class="hljs-tag">&lt;<span class="hljs-name">extension</span>
         <span class="hljs-attr">point</span>=<span class="hljs-string">"de.blogspot.wrongtracks.FooService"</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">implementation</span>
            <span class="hljs-attr">class</span>=<span class="hljs-string">"com.example.impl.FooServiceImpl"</span>
            <span class="hljs-attr">id</span>=<span class="hljs-string">"com.example.impl.FooServiceImpl"</span>
            <span class="hljs-attr">name</span>=<span class="hljs-string">"FooServiceImpl"</span>&gt;</span>
      <span class="hljs-tag">&lt;/<span class="hljs-name">implementation</span>&gt;</span>
   <span class="hljs-tag">&lt;/<span class="hljs-name">extension</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">plugin</span>&gt;</span>
</code></pre>
<p>I have to admit that I am not completely sure how exactly you can integrate the extension points, but I guess you will need quite a lot of the basic Eclipse runtime. There is a <a target="_blank" href="https://angelozerr.wordpress.com/2010/09/14/eclipse-extension-points-and-extensions-without-osgi/">blog post</a> which explains how you can use extension points without depending on OSGi.  </p>
<h3 id="heading-advantages">Advantages</h3>
<ul>
<li>Extensions can be added during runtime</li>
<li>Good tool support inside Eclipse</li>
<li>Wrong wiring only affects single extension</li>
<li>Loose coupling (more or less, since the extensions depend on the extension point id)</li>
</ul>
<h3 id="heading-disadvantages">Disadvantages</h3>
<ul>
<li>Dependencies to Eclipse</li>
<li>Overhead from the Eclipse platform (I actually cannot prove this point, but I assume there must be considerable overhead involved in comparison to the previous methods)</li>
<li>Check for correct wiring only during runtime</li>
</ul>
<h2 id="heading-spring-xml">Spring XML</h2>
<p><img src="https://1.bp.blogspot.com/-lTQ8bziCBiI/Vs2vSt8mfXI/AAAAAAAABkk/98VaQvrsMlM/s1600/leaf-299931_640.jpg" alt /></p>
<p>The Spring framework tried to find a way to create loosely coupled components long before CDI as we know it today appeared. Its solution was an XML file in which the different classes are wired together (I am well aware of the fact that nowadays there are also <a target="_blank" href="http://docs.spring.io/autorepo/docs/spring/4.2.x/spring-framework-reference/html/beans.html#beans-factory-metadata">other ways</a>, but since they are also based on annotations, they don't differ enough from CDI to warrant their own paragraph). In the basic XML file you define all your beans and Spring takes care of the instantiation. It is also possible to distribute the configuration among several XML files. A very simple example (taken and modified from <a target="_blank" href="http://docs.spring.io/autorepo/docs/spring/4.2.x/spring-framework-reference/html/beans.html#beans-factory-instantiation">the Spring documentation</a>) looks like this:  </p>
<pre><code class="lang-xml"><span class="hljs-meta">&lt;?xml version="1.0" encoding="UTF-8"?&gt;</span>
<span class="hljs-tag">&lt;<span class="hljs-name">beans</span> <span class="hljs-attr">xmlns</span>=<span class="hljs-string">"http://www.springframework.org/schema/beans"</span>
    <span class="hljs-attr">xmlns:xsi</span>=<span class="hljs-string">"http://www.w3.org/2001/XMLSchema-instance"</span>
    <span class="hljs-attr">xsi:schemaLocation</span>=<span class="hljs-string">"http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd"</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">bean</span> <span class="hljs-attr">id</span>=<span class="hljs-string">"accountDao"</span>
        <span class="hljs-attr">class</span>=<span class="hljs-string">"org.springframework.samples.jpetstore.dao.jpa.JpaAccountDao"</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">bean</span>&gt;</span>

    <span class="hljs-tag">&lt;<span class="hljs-name">bean</span> <span class="hljs-attr">id</span>=<span class="hljs-string">"petStore"</span> <span class="hljs-attr">class</span>=<span class="hljs-string">"org.springframework.samples.jpetstore.services.PetStoreServiceImpl"</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">property</span> <span class="hljs-attr">name</span>=<span class="hljs-string">"accountDao"</span> <span class="hljs-attr">ref</span>=<span class="hljs-string">"accountDao"</span>/&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">bean</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">beans</span>&gt;</span>
</code></pre>
<p>If you want to provide your users with a way to add their services/plugins to the framework, you'll have to offer a setter method through which they can add their objects. E.g. like this (taken from the <a target="_blank" href="https://docs.camunda.org/manual/7.3/guides/user-guide/#spring-framework-integration-process-engine-configuration-configuring-a-process-engine-plugin-in-spring">camunda documentation</a>):  </p>
<pre><code class="lang-xml"><span class="hljs-tag">&lt;<span class="hljs-name">bean</span> <span class="hljs-attr">id</span>=<span class="hljs-string">"processEngineConfiguration"</span> <span class="hljs-attr">class</span>=<span class="hljs-string">"org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration"</span>&gt;</span>
  ...
  <span class="hljs-tag">&lt;<span class="hljs-name">property</span> <span class="hljs-attr">name</span>=<span class="hljs-string">"processEnginePlugins"</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">list</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">bean</span> <span class="hljs-attr">id</span>=<span class="hljs-string">"spinPlugin"</span> <span class="hljs-attr">class</span>=<span class="hljs-string">"org.camunda.spin.plugin.impl.SpinProcessEnginePlugin"</span> /&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">list</span>&gt;</span>
  <span class="hljs-tag">&lt;/<span class="hljs-name">property</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">bean</span>&gt;</span>
</code></pre>
<h3 id="heading-advantages">Advantages</h3>
<ul>
<li>Spring is lightweight</li>
<li>Lifecycle support from Spring</li>
</ul>
<h3 id="heading-disadvantages">Disadvantages</h3>
<ul>
<li>XML needs to be maintained</li>
<li>No auto-detection; users have to write the XML when they want to add something</li>
<li>The Spring IoC container is needed</li>
<li>Correct wiring is only checked at startup</li>
</ul>
<h2 id="heading-osgi-services">OSGi Services</h2>
<p>OSGi was created with runtime changes in mind: bundles dynamically provide and remove their services. Because of this, OSGi strongly supports applications being extended by services provided by different bundles. The simplest approach is to implement a <code>ServiceListener</code> or a <code>ServiceTracker</code>. Both should be created on bundle start, and they will react when a new implementation of the service appears. A <code>ServiceListener</code> can be as simple as this (taken from <a target="_blank" href="https://www.blogger.com/www.knopflerfish.org/osgi_service_tutorial.html">the Knopflerfish tutorial</a>):  </p>
<pre><code class="lang-java"> ServiceListener sl = <span class="hljs-keyword">new</span> ServiceListener() {
   <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">serviceChanged</span><span class="hljs-params">(ServiceEvent ev)</span> </span>{
      ServiceReference sr = ev.getServiceReference();
      <span class="hljs-keyword">switch</span>(ev.getType()) {
        <span class="hljs-keyword">case</span> ServiceEvent.REGISTERED:
          {
             HttpService http = (HttpService)bc.getService(sr);
             http.registerServlet(...);
          }
          <span class="hljs-keyword">break</span>;
        <span class="hljs-keyword">default</span>:
          <span class="hljs-keyword">break</span>;
      }
   }
 };

 String filter = <span class="hljs-string">"(objectclass="</span> + HttpService.class.getName() + <span class="hljs-string">")"</span>;
 bc.addServiceListener(sl, filter);
</code></pre>
<p>Here, <code>bc</code> is a <code>BundleContext</code> object. A <code>ServiceTracker</code> can be used like this:  </p>
<pre><code class="lang-java">ServiceTracker&lt;HttpService,HttpService&gt; serviceTracker = <span class="hljs-keyword">new</span> ServiceTracker&lt;HttpService, HttpService&gt;(bc, HttpService.class, <span class="hljs-keyword">null</span>);
serviceTracker.open();
</code></pre>
<p>There are more elegant ways to get hold of an OSGi service using <a target="_blank" href="http://wiki.osgi.org/wiki/Blueprint">Blueprint</a>, <a target="_blank" href="http://wiki.osgi.org/wiki/Declarative_Services">Declarative Services</a> or the <a target="_blank" href="http://felix.apache.org/documentation/subprojects/apache-felix-dependency-manager.html">Apache Felix Dependency Manager</a> but the <code>ServiceListener</code> is the basic way.  </p>
<h3 id="heading-advantages">Advantages</h3>
<ul>
<li>OSGi lifecycle support</li>
<li>Changes during runtime "encouraged" ;)</li>
<li>Compiler checks wiring (not for the <code>ServiceListener</code> but for the rest)</li>
<li>Problems with services are restricted to a single bundle</li>
</ul>
<h3 id="heading-disadvantages">Disadvantages</h3>
<ul>
<li>You have to buy the whole OSGi package: imports, exports, bundles and everything</li>
<li>Having the full OSGi lifecycle makes the world more complicated, since every service can disappear at any moment</li>
</ul>
<h4 id="heading-note-about-pojosrosgi-light">Note about PojoSR/OSGi Light</h4>
<p>Since the biggest disadvantage of OSGi is that you have to get the whole package, I want to mention another approach here, called PojoSR or OSGi Light. Its goal is to give you the OSGi service concept without the rest that comes with OSGi. Unfortunately, I could not find much documentation about it, and the activity around this project seems to be very low at the moment. There is an article <a target="_blank" href="http://wiki.osgi.org/wiki/PojoSR">here</a> and the PojoSR framework <a target="_blank" href="https://code.google.com/archive/p/pojosr/wikis/Usage.wiki">itself</a>. Also, it looks like PojoSR is now a part of Apache Felix called <a target="_blank" href="https://github.com/apache/felix/tree/trunk/connect">"Connect"</a>, but its version is 0.1.0. So if any of you knows more about it, please let me know.</p>
<h2 id="heading-cdi">CDI</h2>
<p><img src="https://2.bp.blogspot.com/-WGsPq53SzAU/Vs2rM8Jp20I/AAAAAAAABkY/uaQIFJkiGHo/s1600/noun_183548_cc.png" alt class="image--right mx-auto mr-0" /></p>
<p>Contexts and Dependency Injection was a big step for Java EE, allowing developers to write more loosely coupled code. The CDI container takes care of automagically wiring the different parts together; the developer only has to use the correct annotations. Depending on which CDI beans are present at runtime, concrete implementations can be changed without changing the code that uses them. The basic injection of a class looks like this:  </p>
<pre><code class="lang-java">    <span class="hljs-meta">@Inject</span> <span class="hljs-keyword">private</span> MyServiceInterface service;
</code></pre>
<p>If there is a need to get all of the implementations (which is what we actually want here), the class <code>Instance</code> must be used:  </p>
<pre><code class="lang-java">    <span class="hljs-meta">@Inject</span> <span class="hljs-meta">@Any</span> <span class="hljs-keyword">private</span> Instance&lt;MyServiceInterface&gt; services;
</code></pre>
<p>Since <code>Instance</code> is an <code>Iterable</code>, a simple for-each loop can be used to access all the objects. Alternatively, the <code>select()</code> method can be used to specify further requirements.  </p>
<h3 id="heading-advantages">Advantages</h3>
<ul>
<li>Compiler checks for correct type</li>
<li>CDI container checks correct wiring at startup</li>
<li>Part of the Java EE standard, but can also be used without an application server (use a JSR-330 implementation like Guice or HK2)</li>
<li>CDI lifecycle support</li>
</ul>
<h3 id="heading-disadvantages">Disadvantages</h3>
<ul>
<li>A CDI container is needed</li>
<li>Changes during runtime are not possible</li>
<li><a target="_blank" href="http://www.annotatiomania.com/">Annotatiomania</a> (at least if you don't watch out)</li>
</ul>
<h2 id="heading-summary">Summary</h2>
<p>As you can see, many different frameworks/methods have evolved in the Java ecosystem, every single one with its specific advantages and disadvantages. I think we can summarize the different extension mechanisms as three types (with their members):  </p>
<ol>
<li>String and well-known location ("Interface and Reflection", "ServiceLoader", "(Eclipse) Extension Points", "Spring XML")</li>
<li>Programmatic wiring ("Passing the object", "Interface and Reflection", "OSGi Services")</li>
<li>Classpath scanning ("CDI")</li>
</ol>
<p>Of course, the three types are not exclusive. You may provide your users with more than one way and let them choose. Also, CDI is not the only framework that uses classpath scanning: Spring, with its two other ways of configuring the IoC container, relies on that method, too.  </p>
<p>I hope this article provides a good and sufficient overview of the different methods for creating an extensible framework. Choosing the right one will surely make your users happy. If you know another method that I forgot, please let me know and I will gladly add it here.  </p>
<p>Please note that the lists of advantages and disadvantages are based on my own reasoning. I tried to be objective, but like every programmer I have my favorites, and my experiences with the frameworks may make me a little bit biased.</p>
]]></content:encoded></item><item><title><![CDATA[What can capabilities do for your processes?]]></title><description><![CDATA[Before we release camunda BPM OSGi 2.0 I want to do a little bit more of advertisement for it and show what is possible with the new version. One change in the new version will be, that it depends on OSGi 4.3 and no longer 4.2. One change, besides th...]]></description><link>https://blog.code-n-roll.dev/what-can-capabilities-do-for-your-processes</link><guid isPermaLink="true">https://blog.code-n-roll.dev/what-can-capabilities-do-for-your-processes</guid><category><![CDATA[OSGi]]></category><category><![CDATA[camunda]]></category><category><![CDATA[BPMN]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Sat, 20 Feb 2016 10:12:00 GMT</pubDate><content:encoded><![CDATA[<p>Before we release camunda BPM OSGi 2.0, I want to do a little bit more advertisement for it and show what is possible with the new version. One change is that it now depends on OSGi 4.3 instead of 4.2. Besides the fact that I can now use generics in the code (yay!), this means the capability headers will work. So, what's so impressive about them?  </p>
<h2 id="heading-capability-headers">Capability headers</h2>
<p>The capability headers are two headers: <code>Provide-Capability</code> and <code>Require-Capability</code>. They are a further abstraction of the <code>Import-Package</code> and <code>Export-Package</code> headers we all (should ;)) know. But with the capability headers you are not as limited as with the package headers. Arbitrary things can be defined, e.g.  </p>
<pre><code>Provide<span class="hljs-operator">-</span>Capability: sensor; <span class="hljs-keyword">type</span><span class="hljs-operator">=</span>gyro
</code></pre><p>would be a valid statement. But you are not limited to one attribute:  </p>
<pre><code>Provide<span class="hljs-operator">-</span>Capability: sensor; <span class="hljs-keyword">type</span><span class="hljs-operator">=</span>heat; minTemp<span class="hljs-operator">=</span><span class="hljs-number">0</span>; maxTemp<span class="hljs-operator">=</span><span class="hljs-number">100</span>
</code></pre><p>is also possible. And the bundle that requires such capabilities can use an LDAP filter expression:  </p>
<pre><code>Require-Capability: sensor; <span class="hljs-attribute">filter</span>:=<span class="hljs-string">"(&amp;(type=heat)(minTemp=0)(maxTemp=100))"</span>
</code></pre><p>That way it is possible to find exactly what is needed, specifying more than just packages and versions.<br />How can you use this for your business processes?  </p>
<h2 id="heading-capability-headers-for-processes">Capability headers for processes</h2>
<p>One use case that quickly came to my mind was process definitions that depend on each other, e.g. a process with a call activity. An example could look like this (please excuse that I didn't prepare an exhaustive example):  </p>
<p><img src="https://4.bp.blogspot.com/-SXOEuII0eVA/VsBnLA4iNvI/AAAAAAAABjE/i8fgK5zu5ZM/s1600/diagram.png" alt /></p>
<p>Let's call this one the "Hunger process". And the callee process, the "Phone process" can be as simple as this:  </p>
<p><img src="https://1.bp.blogspot.com/-_-PICo3bei8/VsBnmntjenI/AAAAAAAABjM/1HkT2MYPuvo/s1600/diagram%25281%2529.png" alt /></p>
<p>The last time I checked, nothing stops you from trying to start the Hunger process even though the Phone process hasn't been deployed yet. If the Hunger process were something you want to start automatically, you would run into a nasty exception. Here, the headers can help. You could simply declare in your MANIFEST that you require the Phone process before your bundle can be started:  </p>
<pre><code>Require-Capability: process; <span class="hljs-attribute">filter</span>:=<span class="hljs-string">"(key=Phone_process)"</span>
</code></pre><p>You could also add a version number or whatever seems useful. The bundle containing the Phone process should then of course contain the appropriate part:  </p>
<pre><code>Provide<span class="hljs-operator">-</span>Capability: process; key<span class="hljs-operator">=</span>Phone_process
</code></pre><p>So, when you deploy the bundle with the Hunger process, it cannot be started without the bundle containing the Phone process. That way you can manage your process interdependencies without running into exceptions.<br />Finally, I want to give you a short example in case you use the maven-bundle-plugin.  </p>
<h2 id="heading-setting-the-headers-with-the-maven-bundle-plugin">Setting the headers with the maven-bundle-plugin</h2>
<p>With the maven-bundle-plugin it is really easy to set the headers. I'll assume that you use <code>&lt;packaging&gt;bundle&lt;/packaging&gt;</code> in your POM. Here's how you can set the headers:  </p>
<pre><code class="lang-xml"><span class="hljs-tag">&lt;<span class="hljs-name">plugin</span>&gt;</span>
   <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.apache.felix<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
   <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>maven-bundle-plugin<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
   <span class="hljs-tag">&lt;<span class="hljs-name">extensions</span>&gt;</span>true<span class="hljs-tag">&lt;/<span class="hljs-name">extensions</span>&gt;</span>
   <span class="hljs-tag">&lt;<span class="hljs-name">configuration</span>&gt;</span>
     <span class="hljs-tag">&lt;<span class="hljs-name">instructions</span>&gt;</span>
       <span class="hljs-tag">&lt;<span class="hljs-name">Provide-Capability</span>&gt;</span>process; key=Phone_process<span class="hljs-tag">&lt;/<span class="hljs-name">Provide-Capability</span>&gt;</span>
     <span class="hljs-tag">&lt;/<span class="hljs-name">instructions</span>&gt;</span>
   <span class="hljs-tag">&lt;/<span class="hljs-name">configuration</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">plugin</span>&gt;</span>
</code></pre>
<p>See, piece of cake ;)  </p>
<p>I hope I could give you some idea of how to use the capability headers that OSGi 4.3 introduced. This was just a quick example, but I think it shows nicely how OSGi can support your BPMN processes.</p>
]]></content:encoded></item><item><title><![CDATA[camunda BPM OSGi - Event Bridge]]></title><description><![CDATA[I have implemented the eventing feature already some months ago but I haven't managed to advertise it a little bit more until now. So, let's praise my work ;)  
I'll start with some background information, which you can skip if you're familiar with c...]]></description><link>https://blog.code-n-roll.dev/camunda-bpm-osgi-event-bridge</link><guid isPermaLink="true">https://blog.code-n-roll.dev/camunda-bpm-osgi-event-bridge</guid><category><![CDATA[camunda]]></category><category><![CDATA[Java]]></category><category><![CDATA[OSGi]]></category><dc:creator><![CDATA[Ronny Bräunlich]]></dc:creator><pubDate>Sat, 13 Feb 2016 10:05:00 GMT</pubDate><content:encoded><![CDATA[<p>I implemented the eventing feature some months ago already, but I haven't managed to advertise it until now. So, let's praise my work ;)  </p>
<p>I'll start with some background information, which you can skip if you're familiar with camunda BPM and the OSGi EventAdmin. Then, some information about the what and how follows.  </p>
<p>Let's start with OSGi eventing.  </p>
<h2 id="heading-osgi-event-admin">OSGi Event Admin</h2>
<p>The Event Admin is part of the OSGi Compendium Specification. It offers a way for bundles to communicate in a decoupled manner by sending events. The communication follows a publish/subscribe scheme.  </p>
<p>One bundle obtains the <code>EventAdmin</code> service, creates an <code>Event</code> object and sends it. Every event is created with a certain topic and can contain arbitrary String properties as key-value pairs. Topics are hierarchical, separated by a "/", and wildcards are allowed. E.g. <code>org/osgi/framework/BundleEvent/STARTED</code> is a topic used by the OSGi framework.  </p>
<p>Events can be sent synchronously or asynchronously, and additional LDAP filters based on the properties can be used.  </p>
<p>You can find a good example on the <a target="_blank" href="http://felix.apache.org/documentation/subprojects/apache-felix-event-admin.html">Apache Felix website</a>.  </p>
<p>Now that we know a little bit about the EventAdmin let's take a look at camunda BPM.  </p>
<h2 id="heading-camunda-bpm-events">camunda BPM events</h2>
<p>During the execution of a process certain events occur, e.g. a task is assigned or a process ends. To be able to "see" those events, the user has to register either an <code>ExecutionListener</code> or a <code>TaskListener</code> (for more details see <a target="_blank" href="https://docs.camunda.org/manual/7.4/user-guide/process-engine/delegation-code/#execution-listener">here</a> and <a target="_blank" href="https://docs.camunda.org/manual/7.4/user-guide/process-engine/delegation-code/#task-listener">here</a>).  </p>
<p>The common way to register the listeners is to directly add them to the process definition, i.e. the .bpmn file. But there are certainly cases where we do not own the process file but would like to receive events (e.g. for monitoring).  </p>
<p>Let's see how to achieve this in an OSGi environment.  </p>
<h2 id="heading-camunda-bpm-osgi-event-bridge">camunda BPM OSGi - Event Bridge</h2>
<p>I gotta admit the idea of an event bridge is not my own, because the CDI extension for camunda BPM already has a <a target="_blank" href="https://docs.camunda.org/manual/7.4/user-guide/cdi-java-ee-integration/the-cdi-event-bridge/">CDI event bridge</a>. Anyway, for OSGi this feature was missing. I'll explain what happens internally and how you can use it.  </p>
<h3 id="heading-what-happens">What happens?</h3>
<p>The OSGi event bridge implementation exports a service that is a <code>BpmnParseListener</code>. Whenever the engine parses a process definition, this listener becomes active and attaches a <code>TaskListener</code> and an <code>ExecutionListener</code> wherever possible. But these listeners aren't full implementations: they are dynamic proxies with a special <code>InvocationHandler</code>.  </p>
<p>When the <code>InvocationHandler</code> is invoked, it checks whether the OSGi event bridge is still active and whether the <code>EventAdmin</code> is present. If so, it instantiates a new <code>OSGiEventDistributor</code>, which creates a new event and fills in the properties.  </p>
<p>I've tried to use all properties the camunda events provide and put them into the event properties. You can see a full list in <a target="_blank" href="https://github.com/camunda/camunda-bpm-platform-osgi/blob/master/camunda-bpm-osgi-eventing-api/src/main/java/org/camunda/bpm/extension/osgi/eventing/api/BusinessProcessEventProperties.java">this class</a>.  </p>
<p>This is basically what is happening. So, what can you do with the event bridge?  </p>
<h3 id="heading-how-to-use-it">How to use it?</h3>
<p>Before you can make use of the OSGi event bridge, you have to add the <code>OSGiEventBridgeActivator</code> as a <code>BpmnParseListener</code> to your <code>ProcessEngineConfiguration</code>. You do this with the method <code>setCustomPreBPMNParseListeners()</code>. Unfortunately, there is no way to add the listener to an already created engine. After adding the listener, events are published. The event topics are:  </p>
<ul>
<li><code>org/camunda/bpm/extension/osgi/eventing/TaskEvent</code></li>
<li><code>org/camunda/bpm/extension/osgi/eventing/Execution</code></li>
</ul>
<p>Of course you can use an asterisk after <code>../eventing/</code> to match both.  </p>
<p>Wherever you want to listen to events, you can create your own <code>EventHandler</code> and subscribe to the topic you need/want. A simple example would be:  </p>
<pre><code class="lang-java">EventHandler eventHandler = <span class="hljs-keyword">new</span> EventHandler() {
  <span class="hljs-meta">@Override</span>
  <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">handleEvent</span><span class="hljs-params">(Event event)</span> </span>{
    Logger.getGlobal().info(<span class="hljs-string">"Event occurred: "</span> + event.getTopic());
  }
};
Dictionary&lt;String, String&gt; props = <span class="hljs-keyword">new</span> Hashtable&lt;String, String&gt;();
props.put(org.osgi.service.event.EventConstants.EVENT_TOPIC, org.camunda.bpm.extension.osgi.eventing.api.Topics.ALL_EVENTING_EVENTS_TOPIC);
bundleContext.registerService(EventHandler.class.getName(), eventHandler, props);
</code></pre>
<p>Since a lot of information is inside the event properties, you can also use a more sophisticated LDAP filter expression based on it. E.g. if you only want to receive events for a certain process, you can do this:  </p>
<pre><code class="lang-java">EventHandler eventHandler = <span class="hljs-keyword">new</span> EventHandler() {
...
};
Dictionary&lt;String, String&gt; props = <span class="hljs-keyword">new</span> Hashtable&lt;String, String&gt;();
props.put(EventConstants.EVENT_TOPIC, Topics.ALL_EVENTING_EVENTS_TOPIC);
props.put(EventConstants.EVENT_FILTER, <span class="hljs-string">"(processDefinitionId=invoice)"</span>);
bundleContext.registerService(EventHandler.class.getName(), eventHandler, props);
</code></pre>
<p>And that's it. At the moment there is no way to limit the applications that are allowed to receive events, so everyone who subscribes can see all of them. If you have an idea how to do this in a nice way, please let me know.  </p>
<p>I hope you can make good use of the OSGi event bridge. My plan is to release camunda BPM OSGi 2.0.0 (which includes the event bridge) shortly after camunda BPM 7.5.0 is released.</p>
]]></content:encoded></item></channel></rss>