Tuesday, October 10, 2017

How do I avoid the error "Unable to validate the following destination configurations" when using S3 event notifications in CloudFormation?

There is a very important AWS post about avoiding the "Unable to validate the following destination configurations" error.
Too bad it's not mentioned right next to the S3 and SNS/SQS reference documentation.

BUT! That post is missing an important part: you will get this error even if you didn't specify a TopicPolicy (or QueuePolicy) at all!
Furthermore, you will get this error even if you did specify the policy, but it's not correct.
For example, if your policy is too restrictive and S3 would not be able to send events to SNS, you will also get this error! Is it clear from the error's description? Not really. Is it clear from the AWS post above? No, not at all.

So just remember: when you see "Unable to validate the following destination configurations", check the policy. It may be missing, or it may be incorrect or too restrictive; either way, the problem is with the policy, and not with the bucket.
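As an illustration, here is a sketch of an SNS topic policy that allows the bucket to publish events (all the resource names are hypothetical; passing the bucket name in via a parameter is one way to avoid a circular reference between the bucket and the policy):

```yaml
NotificationTopicPolicy:
  Type: AWS::SNS::TopicPolicy
  Properties:
    Topics:
      - !Ref NotificationTopic
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: s3.amazonaws.com   # S3 must be allowed to publish
          Action: sns:Publish
          Resource: !Ref NotificationTopic
          Condition:
            ArnLike:
              aws:SourceArn: !Sub "arn:aws:s3:::${BucketNameParameter}"
```

If this statement is missing, or is narrowed so that the bucket no longer matches aws:SourceArn, the stack fails with exactly the error above.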

Wednesday, October 4, 2017

CloudFormation Tips

Some tips for using CloudFormation:

1. Don't specify a resource name unless you absolutely must. This way you can avoid name clashes, since CloudFormation will automatically assign unique names to your resources.
2. If you do need to specify a name, include the stack name in it. This way you reduce the potential for name clashes. You can also include the partition and region for resources whose names are global (e.g. S3 bucket names). Note that this will NOT prevent naming clashes completely, since somebody else can still use the same name.
3. When creating any IAM resources in your stack, make sure to add DependsOn to the resources that use these IAM resources. Apparently CloudFormation is not smart enough to resolve this dependency tree and handle it without additional configuration.
4. Sometimes the names CloudFormation gives to your resources are completely unrelated to the stack name. Include the ARNs of such resources in the Outputs, so you can easily find them later when needed.
5. A very common scenario in AWS is an S3 bucket that fires events to SNS or SQS when a file is uploaded. Apparently it's impossible to create this in a single change. See this post.
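To illustrate tip 3, a hypothetical fragment: referencing the role with !GetAtt only makes the function depend on the role itself, not on a separately declared policy, so the policy needs an explicit DependsOn:

```yaml
RolePolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: my-function-policy
    Roles: [!Ref LambdaRole]
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action: s3:GetObject
          Resource: "*"
MyFunction:
  Type: AWS::Lambda::Function
  DependsOn: RolePolicy      # wait until the policy is attached, not just the role
  Properties:
    Role: !GetAtt LambdaRole.Arn
    Runtime: java8
    Handler: com.example.Handler
    Code: {S3Bucket: my-bucket, S3Key: code.zip}
```

Without the DependsOn, the function can be created before the policy is attached and fail on its first invocations.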


Sunday, April 10, 2016

Print Gradle Dependencies

One way to print a project's Gradle dependencies is 'gradle dependencyReport'.
However, it creates a very large file with many scopes that are sometimes hard to track.
Sometimes it is useful just to print the list of dependencies of a specific scope.
A very small script can do the job, and here are some examples:
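For instance, a minimal sketch as a task in build.gradle (the configuration name is an assumption; depending on your Gradle version it may be compile, runtime or runtimeClasspath):

```groovy
// Print the resolved dependencies of a single configuration, one per line.
task printDeps {
    doLast {
        configurations.runtimeClasspath.resolvedConfiguration.resolvedArtifacts
                .collect { it.moduleVersion.id.toString() }
                .sort()
                .each { println it }
    }
}
```

Then just run: gradle printDeps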

Monday, November 23, 2015

Dropwizard: Add thread name to log

This should have been trivial, but somehow it isn't.
So I'll put it here.
Adding the thread name to Dropwizard logs:
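A sketch of the change in the Dropwizard config.yml, assuming the default console appender (the pattern is Logback syntax; %thread is the token that matters here):

```yaml
logging:
  level: INFO
  appenders:
    - type: console
      # %thread inserts the thread name into every log line
      logFormat: "%-5level [%date] [%thread] %logger{36}: %message%n"
```

The same logFormat property works for a file appender as well.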

Unix Shell: Use of functions to create complicated aliases

In *nix shells it is sometimes useful to create aliases that receive parameters.
This can be done using functions:
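For example (the host naming scheme and key path here are hypothetical), a function that wraps ssh, plus a dry-run variant that only prints the command it would run:

```shell
# A function behaves like an alias, but can use its parameters ($1, $2, ...).
# Hypothetical: log in to a host by its short name, with a matching key.
kssh() {
    ssh -i "$HOME/.ssh/keys/$1.pem" "admin@$1.example.com"
}

# Same idea, but only printing the command (handy for checking the expansion):
kssh_dry() {
    echo ssh -i "$HOME/.ssh/keys/$1.pem" "admin@$1.example.com"
}

kssh_dry web1
```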

Now you can just type something like kssh or pssh.

Sunday, January 18, 2015

Pretty Format of JSON in vim

Open ~/.vimrc
Add the following line:
command Json :%!python -m json.tool

To format JSON, type :Json
Note: you need Python installed. On systems that only have python3, use :%!python3 -m json.tool instead.

Inspired by this post.

Wednesday, May 14, 2014

Monitor Java on Unix/Linux/Solaris

Just a short memo of useful commands that can help to troubleshoot Java on *nix:
(Btw, most of them will also work on Windows, but who runs Java on Windows? Just kidding...)

jps -m - show running Java processes and their pids
jstack <pid> - print a thread dump
jmap -dump:format=b,file=<path to file> <pid> - save a heap dump to a file

Thursday, August 1, 2013

UnresolvedAddressException Tip

Getting java.nio.channels.UnresolvedAddressException?
Having no idea why this happens?

Check the code that creates the address. Did you use java.net.InetSocketAddress.createUnresolved(String, int) to create it?
Do NOT! Just use new java.net.InetSocketAddress(String, int) and it should be fixed.
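A small demonstration of the difference (class and variable names are mine):

```java
import java.net.InetSocketAddress;

public class AddressDemo {
    public static void main(String[] args) {
        // createUnresolved() deliberately skips the DNS lookup, so channels
        // that require a resolved address throw UnresolvedAddressException.
        InetSocketAddress bad = InetSocketAddress.createUnresolved("localhost", 8080);
        System.out.println("createUnresolved: unresolved = " + bad.isUnresolved()); // true

        // The constructor resolves the hostname immediately.
        InetSocketAddress good = new InetSocketAddress("localhost", 8080);
        System.out.println("constructor:      unresolved = " + good.isUnresolved()); // false
    }
}
```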

P.S. This is the kind of post I write here after spending hours on a stupid bug,
so people can google it and spend less time on it.

Tuesday, May 21, 2013

DevOps: Making Fast Deployments of Java Servers using Maven and Nexus

A Warning: this post is theoretical. I have never tried something like this yet. Maybe I will try it in the future. But currently it's just a nice idea.
In addition, if you know about somebody who works in a similar way, I would really like to know. So please comment!

If you provide a SaaS service you probably have multiple Java servers running in some sort of a cluster. If your SaaS solution is complicated and multi-tier, you probably have multiple server types. And now comes the question: how to make quick deployments to production?

The common solution suggests that you build a package and release it. It might be a war, or a zip, or an rpm if you are running on Linux.
Once released, you upload the package to the server, unzip/copy it to the relevant folder and restart the server.

The problem with this solution arises if your packages are large. (And if you are using OSGi, your packages are usually very large!) The upload itself takes time. It also uses traffic, which might become expensive if you perform a lot of deployments. And the really funny thing is that most of the upload is redundant: most of the jars in your package are third parties that do not change between deployments at all!

The common solution suggests pre-uploading the third-party jars to the server and excluding them from the package. I've seen such solutions, and in my opinion they are the exact opposite of a good solution: this way you split the package, the third parties become managed in two (sometimes more) places, and each deployment involves at least one additional (probably manual!) step of checking whether the third parties were changed and whether an additional deployment of third parties is required.

But what if you use Maven, and you upload your released packages to Nexus (or actually any other Maven repository)? This Nexus repository contains all the third parties, all the released packages and, most important, the pom file that was used to build your project!
If you download this pom file, you will be able to build the package on the production server! Note that you don't need to do the full build that includes compilation, testing and so on. You just need your package, so assuming that you deploy a war, you only need to run "mvn war:war" (once again: I never tried it myself and the actual execution might be more complex, but I think the idea is clear).
Sometimes, if you are running a Java application with a main class (plain old Java, and not some kind of JEE inside an application server or a servlet container), you don't even need a package; you just need a correct classpath, and Maven will be happy to assist you: mvn dependency:build-classpath.
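Since this post is theoretical, here is only a dry-run sketch of the server-side steps (the coordinates and the Nexus URL are made up); it prints the commands instead of running them:

```shell
#!/bin/sh
# Hypothetical coordinates of the released server package:
NEXUS=https://nexus.example.com/repository/releases
GROUP=com/example
ARTIFACT=myserver
VERSION=1.2.3
POM="$ARTIFACT-$VERSION.pom"

# 1. Fetch only the pom of the released version from Nexus.
echo "curl -O $NEXUS/$GROUP/$ARTIFACT/$VERSION/$POM"
# 2. Rebuild just the package from it (dependencies come from Nexus).
echo "mvn -f $POM war:war"
# 3. Or, for a plain main-class application, only compute the classpath.
echo "mvn -f $POM dependency:build-classpath -Dmdep.outputFile=cp.txt"
```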

So I guess the idea is clear now. Each time, Maven will download only the relevant jars and save them to the local repository. The dependencies are managed in the same pom file that was used to release the application, so when making a package, or creating a classpath on a production machine, the exact same dependencies will be used.
And the deployment process will become much faster!

I know that this idea is somewhat different from the usual process. Instead of doing something like "build, deploy, run", we do something that might look even more complicated: "build, deploy descriptor only, package, run". But it should be much faster. So I definitely think this idea is worth trying.

P.S. The idea described in this post relates only to the package itself: building, packaging and running. The deployment may contain additional steps, like changing local configuration files and so on. These steps are not covered here, as they are usually not covered in a build process but are part of the release notes. A possible solution is deploying the relevant scripts to the Nexus repository and somehow describing them in a pom file. When downloading the pom, the relevant scripts will also be downloaded and executed.

P.P.S. The idea also doesn't cover the tool that drives the whole process. Although it describes that the tool uses Maven, it says nothing about the actual implementation. It might be a Java process. Or a shell script. Or even Ant.

P.P.P.S. Notice that downloading files from Nexus using Maven performs important checks for you, for example an integrity check, which is very important in case of a bad network between the Nexus holding the releases and a production site.
In addition, you can make some optimizations on Nexus. For example, if you have several production sites all over the world, each site may have a Nexus pointing to the main release repository and caching it. This will make the deployments even faster.


Wednesday, February 13, 2013

Deadlock in Jetty or Be Careful while Synchronizing

About nine months ago I reported a bug to the Jetty community that session timeout doesn't work properly. The bug was fixed quite quickly, but nine months later I have discovered that the fix leads to a deadlock in some scenarios.

Deadlock in Jetty illustrates an interesting coding guideline that you must follow while writing your code.

So what happened in Jetty?

Consider a class A that carries state and lives in a multi-threaded environment. Obviously this class must be synchronized.
Consider that you can subscribe to events of class A. Let's say that to subscribe you must implement an interface I that will be notified when something important happens in class A.
Let's assume that the method in which class A invokes instances of I is synchronized (A carries state, remember?)
Let's also assume that your implementation of I also carries state and must be synchronized as well.

And now let's see what happens:

Thread-1: Some event on A occurs. A wants to notify I, so it first acquires LOCK_A and then invokes a method of I. The method of I tries to change the state of I, so it tries to acquire LOCK_I, but LOCK_I was already acquired by Thread-2.

Thread-2: Runs on I. It changes the state of I, so it acquires LOCK_I. During the change it needs some information from A. It tries to get it, but LOCK_A was already acquired by Thread-1.

And here we have a deadlock.

So what is wrong here?
The most wrong part is in class A: it invokes a method of some other class while it holds a lock. BAD! Release the lock before calling someone else! And when I say "someone else", I include the other methods of the same class! (What really happened in Jetty is that method f1() of class A was synchronized. Method f1() called f2(), which called f3(), which called f4(), which called I. It seemed clear in f4() that no synchronization was needed. But the mistake is actually in f1()!)
So you have some members to change? Acquire the lock, change them and release the lock.

In addition, the situation could be slightly improved if a read-write lock was used instead of synchronized: most of the accesses to class A are reads. Maybe if LOCK_A was split into READ_LOCK_A and WRITE_LOCK_A, and LOCK_I into READ_LOCK_I and WRITE_LOCK_I, the deadlock would not have happened. But this is not about preventing the situation, only about improving it.


The main point of my post is: when synchronizing, find the critical section and synchronize only it! Do not call other methods (even if they are methods of the same class) from the critical section: gather all the information before, and notify everyone else after.
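The guideline can be sketched in code (class and method names are mine, not Jetty's): mutate the state and snapshot the listener list inside the lock, then run the callbacks with no lock held.

```java
import java.util.ArrayList;
import java.util.List;

// Change state inside the lock, but notify listeners outside it, so a
// listener taking its own lock can never deadlock against us.
class EventSource {
    private final Object lock = new Object();
    private int state;
    private final List<Runnable> listeners = new ArrayList<>();

    void subscribe(Runnable listener) {
        synchronized (lock) { listeners.add(listener); }
    }

    int getState() {
        synchronized (lock) { return state; }
    }

    void update(int newState) {
        List<Runnable> snapshot;
        synchronized (lock) {                      // critical section only:
            state = newState;                      //  - mutate the state
            snapshot = new ArrayList<>(listeners); //  - copy the listener list
        }
        for (Runnable l : snapshot) {              // no lock held here
            l.run();
        }
    }
}
```

A listener is now free to call back into getState(), or to take its own lock first, without any risk of the Thread-1/Thread-2 scenario above.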