Java Money Profiling

Made available a new repository on GitHub:

https://github.com/MartinanderssonDotcom/money-profiling

Description copied from the above link:

Java Money Profiling explores a few different ways to model a monetary amount in Java.

It also provides unit tests and benchmarks that demonstrate the relevant APIs and output profiling results; most notably the time cost of serialization and the byte sizes.
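To illustrate what “a few different ways” can mean, here is a generic sketch of two common models: an arbitrary-precision amount and a fixed-point amount in minor units. These are illustrations only, not the repository’s actual types:

import java.math.BigDecimal;
import java.util.Currency;

// Generic illustrations only; not types taken from the repository.

// 1) Arbitrary precision: a BigDecimal paired with a currency.
final class DecimalMoney {
    final BigDecimal amount;
    final Currency currency;

    DecimalMoney(BigDecimal amount, Currency currency) {
        this.amount = amount;
        this.currency = currency;
    }
}

// 2) Fixed point: a long counting minor units (e.g. cents), which tends to
//    be compact to serialize but is limited in range and scale.
final class MinorUnitMoney {
    final long minorUnits;
    final Currency currency;

    MinorUnitMoney(long minorUnits, Currency currency) {
        this.minorUnits = minorUnits;
        this.currency = currency;
    }
}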

TypeScript with Gradle

There are two Gradle plugins on the market today that both promise TypeScript support:

https://github.com/sothmann/typescript-gradle-plugin

https://github.com/prezi/gradle-typescript-plugin

The only problem is that they don’t work, and their documentation sometimes lies with false examples and, more often, lacks real information, forcing you to study the source code just to get a clue. Some problems I could work around, but in the end, I never got either plugin to actually work on my Windows machine using Gradle 2.3.6 (the plugin versions tested were 1.0.6 and 2.2.4 respectively). Big big thank you to the authors of these plugins, but please improve the docs =)

Turns out it wasn’t so hard to put together a working TypeScript compilation task anyway. The benefit is not only that it works; you reduce the dependencies of your build script as well =) The following is the build task that I scraped together today, but it isn’t much tested and I bet it isn’t without flaws. So let it serve as inspiration only:

task compileTs {
    def tsSrcDir = "$projectDir/src/main/ts"
    def tsBuildDir = "$buildDir/ts"
    
    def projectToSrc = projectDir.toPath()
            .relativize(java.nio.file.Paths.get(tsSrcDir))
    
    def projectToBuild = projectDir.toPath()
            .relativize(java.nio.file.Paths.get(tsBuildDir))
    
    group = 'build'
    description = "Compile TypeScript .ts files from \"$projectToSrc\" to \"$projectToBuild\"."
    
    // Support incremental builds:
    inputs.dir tsSrcDir
    outputs.dir tsBuildDir
    
    def tsc = {
        // What to execute:
        executable = 'tsc' // <-- TypeScript Compiler
        
        // Redirect output:
        standardOutput = System.out;
        
        // Which files to compile:
        def files = []
        file(tsSrcDir).eachFileRecurse groovy.io.FileType.FILES, {
            if (it.name.endsWith('.ts')) {
                files << it.absolutePath
            }
        }
        args files
    }
    
    // compile
    doLast {
        exec tsc << { // <-- "tsc << {}" combines two closures into one

            // Outdir:
            args '--outDir'
            args "$tsBuildDir/compiled"
        }
    }
    
    // combine (put a "_references.ts" in root to more easily specify the dependency graph)
    doLast {
        exec tsc << {
            args '--out'
            args "$tsBuildDir/combined/${project.name}.js"
        }
    }
}

Java EE and JavaFX real time live chat

Uploaded a video of my Java live chat solution, enjoy:

Description stolen from YouTube:

This video demonstrates an awesome live chat application written in Java EE 7, JavaFX and TypeScript.

The live chat software features some really cool things, like real-time live typing that makes the chat feel like a true conversation, and a busy queue for website visitors waiting for busy or offline chat agents to become available. As far as I know, no other live chat solution on the market today can offer features like that.

You can try the application yourself. Go to http://www.martinandersson.com

SRP, AES/GCM and chunked file transfer over WebSocket

I recently made a Java proof-of-concept application that demonstrates the Secure Remote Password protocol (SRP) and AES/GCM-encrypted, chunked file transfer over WebSocket, fronted with a nice fat client:

There are many ways for a Java EE application to receive binary data from a WebSocket, and this application is so elite that the user may change which strategy to use for each file transfer at runtime. After each transfer, you’ll see some minor statistics about the time consumed by the transfer, as well as the theoretical time consumed by decryption on the server side.
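For reference, JSR-356 lets a server endpoint receive binary data in several ways: as a whole byte[] or ByteBuffer, as partial messages, or as a blocking InputStream. A minimal sketch of the InputStream strategy could look like this (the endpoint path and class name are made up, not taken from the repository):

import java.io.IOException;
import java.io.InputStream;
import javax.websocket.OnMessage;
import javax.websocket.server.ServerEndpoint;

// Illustrative only; the real application demonstrates several strategies
// and lets the user switch between them per file transfer.
@ServerEndpoint("/upload")
public class StreamingUploadEndpoint {

    @OnMessage
    public void receive(InputStream message) throws IOException {
        byte[] buffer = new byte[8192];
        int n;
        while ((n = message.read(buffer)) != -1) {
            // Feed the n bytes read to a decryptor and/or file channel here.
        }
    }
}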

Anyways, loads of documentation and all files can be found here:
https://github.com/martinanderssondotcom/secure-login-file-transfer

WildFly/Undertow WebSocket Exception Handling

When a @ServerEndpoint @OnMessage handler throws a RuntimeException, the JSR-356 implementation Undertow (used in WildFly) force-closes the endpoint and doesn’t even call the @OnClose method. Undertow then calls the @OnError method, but at this point the application code can no longer recover or handle the exception. This behavior violates the JSR-356 specification and effectively kills Java exception programming. GlassFish with its Tyrus implementation works like a charm, at least in this context. Otherwise, I must say WildFly outperforms GlassFish in almost all other ways.
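To make the scenario concrete, here is a minimal sketch of the kind of endpoint being discussed (not the actual test application linked below):

import javax.websocket.OnClose;
import javax.websocket.OnError;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/echo")
public class FailingEndpoint {

    @OnMessage
    public String onMessage(String message) {
        // Imagine validation blowing up somewhere down the call stack:
        throw new RuntimeException("boom");
    }

    @OnError
    public void onError(Session session, Throwable t) {
        // Per JSR-356, the application should be able to handle the error
        // here and keep the session open. Undertow has already force-closed
        // the connection at this point, so there is nothing left to recover.
    }

    @OnClose
    public void onClose(Session session) {
        // Undertow skips this method entirely in the scenario above.
    }
}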

Filed a bug here: https://issues.jboss.org/browse/UNDERTOW-284

Wrote a test application here (the repository’s README.md file offers a workaround): https://github.com/MartinanderssonDotcom/websocket-exception-handling

A concurrent Deque manager in Java

One important component of my live chat is the queue system. If I’m not online or am too busy, then web users are automatically put in a queue, and changes to their positions in the queue are continuously reported on a best-effort basis.

Of course, I am only one dude today, but the live chat software is built with utility in mind. It is a complete live chat solution that can handle a crazy number of users on both ends of the server, communicating in all directions. The domain model is centered around one key entity: the Conversation. Of course, the subject-based conversation is also a resource that a user can stand in line for, if there’s no more room for strangers. The live chat software can be deployed as a pure help desk solution for companies to lure customers in, or as a peer-to-peer chat application for friends and colleagues, or as an online chat service for strangers who want to hook up and participate in conversations based on subject. Or as any combination thereof.

One of the challenges I faced was putting things in a queue based on a multitude of different resources. To solve that problem I wrote a ConcurrentDequeManager that transparently manages deques so that client code doesn’t have to. The number of deques grows and shrinks on demand, and elements (for example web users) can have their position automatically reported to them as the position changes. Best of all, it is lock-free and superfast. Looking up the size of a deque is almost a constant-time operation. In short, client code doesn’t have to worry one bit about concurrency anymore.
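To give a rough feel for the general idea, here is a minimal and simplified sketch of deques keyed by resource and created on demand. It is not the repository’s implementation, which goes further with position reporting, cheap size lookups and safer removal of empty deques:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.ConcurrentMap;

// Simplified sketch only; not the real ConcurrentDequeManager.
class DequeManagerSketch<K, E> {
    private final ConcurrentMap<K, ConcurrentLinkedDeque<E>> deques
            = new ConcurrentHashMap<>();

    void addLast(K key, E element) {
        // compute() creates the deque on demand, atomically per key.
        deques.compute(key, (k, d) -> {
            if (d == null) {
                d = new ConcurrentLinkedDeque<>();
            }
            d.addLast(element);
            return d;
        });
    }

    E pollFirst(K key) {
        ConcurrentLinkedDeque<E> d = deques.get(key);
        E e = (d == null) ? null : d.pollFirst();
        // Naive cleanup: the real manager must guard against the race where
        // another thread adds to the deque right before it is removed here.
        if (d != null && d.isEmpty()) {
            deques.remove(key, d);
        }
        return e;
    }
}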

Read more and download: https://github.com/MartinanderssonDotcom/ConcurrentDequeManager

The dilemma of weak listeners in JavaFX

There exists a lot of confusion regarding the life cycle of JavaFX property listeners and event handlers. For example, many believe that an added property listener must be explicitly removed or memory will leak. To illustrate, here is a quote from the book JavaFX 8: Introduction by Example, chapter 3 (page 78):

One last thing to point out is that it is important to clean up listeners by removing them. To remove them you will invoke the removeListener() method [..].

The remove method referred to is Observable.removeListener().

I searched all my books and the Internet for an explanation as to why it is “important”. All I could find were even more claims about how “important” it is to remove listeners. But knowing just a handful of facts about Java memory management and reference types, I must conclude that the “importance” is a fallacy.

When a String goes out of scope and becomes unreachable, is it important to delete/nullify the char[] contents that the String wraps? No. Just as the String becomes unreachable and eligible for garbage collection, so too do the char[] contents (actually, sharing the contents across many String instances is not forbidden by the JLS and was the cause of a bug in the Oracle-distributed JDK prior to 1.6). It doesn’t matter whether the box knows about the cat inside of it if you throw the box into the ocean. If the box isn’t reachable, neither is the cat.

I spent some time reading through the JavaFX source code, and I have no reason to suspect that malicious code causing memory leaks has been applied. There is no reason to believe that the listeners of, say, a text field’s textProperty work any differently than the char[] of a String.

The JavaFX library (included in Java SE since version 7 update 6) provides WeakListener and WeakEventHandler as necessary tooling for developers who write listeners or event handlers with a different life cycle than the target. I think it is these types, together with a general lack of knowledge about how Java memory management works, that have caused all the confusion. To add even more headache into the mix, literature often speaks of JavaFX properties as some kind of JavaBean superset, thereby implying that JavaFX properties work in a strange and unfamiliar sugar-coated way and making developers nervous when using or writing them. However, it is wrong to say that JavaFX properties must be written in one way or another. How they are written is a convention, not part of a specification. JavaFX properties do no magic.
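Here is a small sketch of the distinction, assuming a long-lived property and a short-lived listener owner (the class and property names are made up):

import javafx.beans.property.SimpleStringProperty;
import javafx.beans.property.StringProperty;
import javafx.beans.value.ChangeListener;
import javafx.beans.value.WeakChangeListener;

public class ListenerLifecycleSketch {

    // Long-lived observable, e.g. an application-wide model property.
    static final StringProperty STATUS = new SimpleStringProperty("idle");

    public static void main(String[] args) {
        // Case 1: the observable dies with (or before) the listener's owner.
        // Nothing to clean up; when the property becomes unreachable, its
        // listener list goes with it, just like the char[] inside a String.
        StringProperty local = new SimpleStringProperty("x");
        local.addListener((obs, o, n) -> System.out.println(n));

        // Case 2: the observable outlives the listener's owner.
        // A strongly-added listener would keep its owner reachable through
        // STATUS, so this is where WeakChangeListener earns its keep.
        ChangeListener<String> owned = (obs, o, n) -> System.out.println(n);
        STATUS.addListener(new WeakChangeListener<>(owned));
        // The caller must keep a strong reference to 'owned' for as long as
        // it wants notifications; otherwise the listener may be collected.
    }
}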

I wrote a StackOverflow answer to bring some clarity.

Teaching Java EE

I’m scheduled for teaching Java EE at my current workplace and for that (and future assignments) I developed an awesome platform:

github.com/MartinanderssonDotcom/java-ee-concepts

Basically, it is a Maven project with Arquillian set up. The POM file includes two profiles for executing the code on GlassFish and WildFly. This project makes “live coding” almost as easy as the static main method in Java SE environments. Write code, then just execute the file as you would a regular JUnit test!

The project includes packages and test files that demonstrate the possibilities of Arquillian. Currently, pure server-side tests and client-to-server tests running in two different JVMs are demonstrated. Core Java EE technologies such as Servlet, EJB and JPA with a real database are also demonstrated within these test suites. I like to believe that the comments provided within the files are sufficient for any Java EE student to get up and running.
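For the uninitiated, an Arquillian test is an ordinary JUnit test class that also declares the deployment it runs against. A skeleton might look roughly like this (GreetingService is a made-up bean, not a class from the project):

import javax.inject.Inject;
import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Test;
import org.junit.runner.RunWith;
import static org.junit.Assert.assertEquals;

@RunWith(Arquillian.class)
public class GreetingServiceTest {

    @Deployment
    public static JavaArchive deployment() {
        // ShrinkWrap builds the archive that Arquillian deploys to GlassFish
        // or WildFly, depending on which Maven profile is active.
        return ShrinkWrap.create(JavaArchive.class)
                         .addClass(GreetingService.class)
                         .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    @Inject
    GreetingService service; // hypothetical CDI bean under test

    @Test
    public void greets() {
        assertEquals("Hello!", service.greet());
    }
}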

If you don’t have a favorite Git client already, I use and recommend SourceTree, which has integrated support for GitHub and Bitbucket. I also use it for other repositories on GitLab. It works great!

Feel free to contribute and let me know what you think =)

Java’s CountDownLatch JavaDoc is flawed

The first example of usage provided in the CountDownLatch JavaDoc might give you the idea that the example works to synchronize the simultaneous start of multiple threads. At least I thought so, until a concurrency test I was working on failed. Closer examination revealed that almost all threads actually missed the start completely. A lot of my concurrency testing involves minimizing phase shifting and doing as much preparation as possible, in a desperate attempt to avoid parallel execution in theory turning into serial execution in practice. When all is done, I seek to maximize the parallelism with a simultaneous start of all worker threads. The CountDownLatch JavaDoc gives two properties of its example:

The first is a start signal that prevents any worker from proceeding until the driver is ready for them to proceed.

The second is a completion signal that allows the driver to wait until all workers have completed.

These are the two properties you get. A third property is totally left out:

Only if the driver’s administrative delay is large enough do the worker threads have enough time to become prepared for the start signal.

..meaning that unless you’re lucky, some of your worker threads, if not all of them, will miss the start. Could it be that knowledge of this third property is what made Oracle inject a call to doSomethingElse() before firing the start signal? I understand you might not have seen the problem yet, so let me rephrase myself a bit and try my best to be clear.

The driver thread, the thread “administrator”, can be really, really quick creating its worker threads. So quick, in fact, that the operating system hasn’t yet had time to schedule the worker threads for a first run before the driver thread moves on and fires the start signal. Thus the start is not synchronized among the workers. The start signal prevents workers from starting too early, but it does not prevent workers from starting too late. With a small number of worker threads you might never see a problem, but it will become one as soon as the size of the thread pool grows a bit.

The threads will always start too late of course, just like the human runners in a real marathon race. But if you’re up to the task of writing a marathon game where each runner is represented by a thread, wouldn’t you want the start to be as fair as possible? Of course the game design should probably be totally reworked, but I think you get the idea.

The Oracle example never addresses the issue of threads starting too late, but if a simultaneous start is important to you, then their example cannot be applied. As my testing has shown, if the driving thread (the coordinating thread that spawns the workers) makes no delay and the worker threads do (just 10 milliseconds in my test), then all worker threads will miss the start. For a marathon, that would be a disaster. One fix could be to use yet a third CountDownLatch. The third latch would synchronize the driver and make him wait for all workers to become prepared before firing the start signal. Another, cleaner solution builds on the same idea: instead of setting the count of the start latch to 1, set it to the number of worker threads + 1 (the driver) and make all threads, the driver/judge included, count down the latch. Not until all threads have cooperatively reached the starting line will the race begin. Quite simple, really. See the example code here.
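In code, the second solution could look something like this minimal sketch (the linked example is more elaborate):

import java.util.concurrent.CountDownLatch;

public class FairStart {

    public static void main(String[] args) throws InterruptedException {
        final int workers = 8;

        // Everybody, workers and driver alike, counts down and then waits.
        final CountDownLatch startingLine = new CountDownLatch(workers + 1);
        final CountDownLatch done = new CountDownLatch(workers);

        for (int i = 0; i < workers; ++i) {
            new Thread(() -> {
                try {
                    // Report "at the starting line", then wait for everyone else.
                    startingLine.countDown();
                    startingLine.await();
                    doWork();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    done.countDown();
                }
            }).start();
        }

        // The driver does not fire a separate start signal; he just joins
        // the line. The race begins when the last party has arrived.
        startingLine.countDown();
        startingLine.await();

        done.await();
    }

    static void doWork() { /* the actual test or race goes here */ }
}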

I’ve built a smallish test framework for this particular example that also demonstrates both solutions. You can find the source code here. It is a Maven project and you can have it run on your machine within minutes. Enjoy!

How to get the HttpSession object from a ServerEndpoint in Java EE 7

All things web-related in a Java EE container are most likely exposed using a Servlet. The Java EE 7 WebSocket server endpoints are no different. Shouldn’t one therefore be able to get hold of the HttpSession object from within a ServerEndpoint class?

Yes, it is doable. I exemplified a solution in code in an answer posted over at stackoverflow.com. Enjoy!
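The gist of the technique is a custom handshake configurator that copies the HttpSession into the endpoint’s user properties; roughly like this sketch (not the exact code from the answer):

import javax.servlet.http.HttpSession;
import javax.websocket.EndpointConfig;
import javax.websocket.HandshakeResponse;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.HandshakeRequest;
import javax.websocket.server.ServerEndpoint;
import javax.websocket.server.ServerEndpointConfig;

public class HttpSessionConfigurator extends ServerEndpointConfig.Configurator {

    @Override
    public void modifyHandshake(ServerEndpointConfig config,
                                HandshakeRequest request,
                                HandshakeResponse response) {
        // The handshake is still an HTTP request, so the HttpSession
        // (if one exists) is available here and nowhere later.
        HttpSession httpSession = (HttpSession) request.getHttpSession();
        config.getUserProperties().put(HttpSession.class.getName(), httpSession);
    }
}

@ServerEndpoint(value = "/chat", configurator = HttpSessionConfigurator.class)
class ChatEndpoint {

    @OnOpen
    public void onOpen(Session session, EndpointConfig config) {
        HttpSession httpSession = (HttpSession)
                config.getUserProperties().get(HttpSession.class.getName());
        // Use the HttpSession, guarding against null if no session existed.
    }
}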