Saturday, November 28, 2015

I Took the Red Pill; Setting Up a Development Environment to Contribute to Wildfly (1)

I have been working with JBoss EAP and Wildfly for almost 2 years, and the experience has been quite interesting. JBoss EAP uses Wildfly as its core Java EE server. What I like most about Wildfly are its easy configuration and management interface, its modular design and, indeed, its fast startup.


There were times I had to extensively investigate class loading mechanisms and dependency management in relation to Wildfly, which gave me a good deal of confidence in its runtime environment. It's just crazy and made me dig deep. Here I explain the steps required to set up your development environment to start contributing to Wildfly.

Prerequisites,
  • Java 8
  • Maven 3
  • GitHub account 

Setting up a personal repository to work,

1. Fork the Wildfly repository into your personal account.
2. Clone your newly forked repository into your local workspace and move into it (replace [your_username] with your GitHub username).

$ git clone git@github.com:[your_username]/wildfly.git
$ cd wildfly

3. Add a remote reference to upstream, for pulling future updates.

$ git remote add upstream git://github.com/wildfly/wildfly.git

4. Disable merge commits to your master branch.

$ git config branch.master.mergeoptions --ff-only

Build the Source Code,

Building Wildfly requires Java 8 or newer; make sure you have JAVA_HOME set to point to the JDK 8 installation. The build uses Maven 3.

1. Run the build.sh script.

$ ./build.sh

Basically, all the script does is run

$ mvn clean install

or, if you don't want to wait for the default test suites, you can run

$ mvn install -Dmaven.test.skip=true

Hakuna matata!


Launch Wildfly,

The built Wildfly zip file can be found in [repository_location]/build/target/.

1. Navigate to the above mentioned directory.

$ cd build/target/wildfly-10.0.0.CR5-SNAPSHOT/bin

Wildfly comes with two operation modes: standalone and domain.
To run Wildfly in standalone mode,

2. Execute the standalone.sh script.

$ ./standalone.sh
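
Similarly, domain mode can be started with the domain.sh script from the same directory.

$ ./domain.sh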



In another post I'll explain how to load the Wildfly source into IntelliJ IDEA and start debugging.



Monday, November 16, 2015

Machine Learning for Fluid Rendering in Video Games


Data-driven Fluid Simulations using Regression Forests from SoHyeon Jeong on Vimeo.

Water is one of the elements that 3D animators put a lot of effort into getting right. It is really difficult to simulate the dynamic movements of millions of particles, and it requires a huge amount of computational resources, especially for real-time rendering in 3D games.

However, a group of researchers from the Swiss Federal Institute of Technology and Disney Research has published a paper that shines some light on the above problem with the aid of machine learning.

Rather than calculating the movements of each particle or reducing the particle count, this method can predict the dynamic movements of particles accurately. The presented algorithm needs to be trained on a set of videos in which fluid particles are animated with accurate calculations. Furthermore, the algorithm doesn't calculate movements in real time; rather, it predicts them according to prior knowledge.

According to the researchers, this algorithm can render fluid animations 3 times faster than existing methods and can animate nearly 2 million particles in real time.

Sunday, October 25, 2015

Puppet; the Continuous Delivery Tool

Puppet is a tool that helps automate application deployment, enabling you to practice continuous delivery. In this post I provide an overview of the open source Puppet tool and outline the configuration and installation instructions specific to a Linux CentOS environment, with recommended best practices. At the end of this post I describe how to deploy a war file to JBoss Wildfly via its command line tool.

Puppet is automation software for IT system administrators and consultants. It allows you to automate repetitive tasks such as the installation of applications and services, patch management, and deployments. Configuration for all resources is stored in so-called "manifests" that can be applied to multiple machines or just a single server.

The open source Puppet tool has two major components: the Puppet Master and the Puppet Agent. They are intended to be hosted in two separate locations, where the Puppet Master keeps all manifest scripts related to deployment automation, while Puppet Agents frequently (every 30 minutes by default) communicate with the Puppet Master to detect any updates to configurations and deployment artifacts and pull them into the agent's environment to finish the deployment.
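
The polling interval is controlled by the runinterval setting in the agent's /etc/puppet/puppet.conf. As a minimal sketch (1800 seconds, i.e. 30 minutes, is the default):

[agent]
    # how often this agent asks the master for a fresh catalog
    runinterval = 1800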



The Puppet Master is responsible for keeping agent-specific deployment scripts, while the Puppet Agent is responsible for contacting the Puppet Master and automating the deployment. First of all, the Puppet Master's port 8140 must be accessible to the Puppet Agent, and both the Puppet Master and Puppet Agent host servers need to have their FQDNs registered with a DNS.

Master Configuration

On CentOS/RHEL 6, where iptables is used as the firewall, the rule below goes into the ":OUTPUT ACCEPT" section of /etc/sysconfig/iptables. Open the file for editing.

#vim /etc/sysconfig/iptables

Add the following line to open port 8140.

-A INPUT -m state --state NEW -m tcp -p tcp --dport 8140 -j ACCEPT

Save and close the file.

Restart the iptables service.

# service iptables restart

Open the hosts file to add the FQDNs of the master and agent nodes.

#vim /etc/hosts

Add FQDNs to the file.

10.101.15.190 nexus-jenkins.abc.lk
10.101.15.197 dev-179.abc.lk

Save and close the file.

Agent Configuration

Puppet client nodes have to know where the Puppet Master server is located. The best practice for this is to use a DNS server, where the Puppet Master's domain name can be configured. If a DNS server is not available, the /etc/hosts file can be modified as follows.

#vim /etc/hosts

Add FQDN of Puppet Master to the file.

10.101.15.197 nexus-jenkins.abc.lk

Save and close the file.

Installing Puppet Master

Since Puppet is not in the basic CentOS or RHEL distribution repositories, add the custom repository provided by Puppet Labs.

# rpm -ivh https://yum.puppetlabs.com/el/6.5/products/x86_64/puppetlabs-release-6-10.noarch.rpm

Install the "puppet-server" module in master server.

#   yum install puppet-server

When the installation is done, set the Puppet server to automatically start on boot and turn it on.

#   chkconfig puppetmaster on
#   service puppetmaster start


Installing Puppet Client

Since Puppet is not in the basic CentOS or RHEL distribution repositories, add the custom repository provided by Puppet Labs.

# rpm -ivh https://yum.puppetlabs.com/el/6.5/products/x86_64/puppetlabs-release-6-10.noarch.rpm

Install the puppet agent package on the agent server.

#   yum install puppet

When the installation is done, set the Puppet agent to start automatically on boot.

#   chkconfig puppet on

Specify the Puppet Master server's FQDN in the /etc/sysconfig/puppet file.

#   vim /etc/sysconfig/puppet

Add the following line to specify the FQDN of the puppet master.

PUPPET_SERVER=nexus-jenkins.abc.lk

The master server name also has to be defined in the agent's Puppet configuration file.

# vim /etc/puppet/puppet.conf

Add the following line to specify the master server.

server=nexus-jenkins.abc.lk

Start the puppet client.

# service puppet start

Certificate Verification

Execute the command below on the Puppet Agent to generate a certificate request.

# puppet agent --test

The following error message will appear in the terminal.

Exiting; no certificate found and waitforcert is disabled

Go back to the Puppet Master server and list all certificate requests by executing the following command.

#   puppet cert list

Sign the certificate by executing the following command in the Puppet Master's terminal.

#   puppet cert sign dev-86.abc.lk

Note: dev-86.abc.lk is the Puppet Agent's FQDN.

Deployment Orchestration

For deployment automation, make sure the site.pp file exists in the /etc/puppet/manifests directory.

The following instructions are to be carried out on the Puppet Master node.

Create the following directory structure using the mkdir command.

/etc/puppet/modules/[project_name]/files/

Example:

/etc/puppet/modules/xyz/files/

Open the /etc/puppet/manifests/site.pp file to configure the deployment plan.

# vim /etc/puppet/manifests/site.pp

Add the following content to the file.

node 'pqr.abc.lk' {
    # copy the artifact from the master's module files to the agent node
    file { '/tmp/xyz/portal.war':
        ensure => 'present',
        mode   => '0755',
        owner  => 'abc',
        group  => 'abc',
        source => 'puppet:///modules/xyz/portal.war',
    }
    # run the CLI deployment only when the copied artifact changes
    exec { 'deploy_portal':
        command     => '/home/abc/wildfly/bin/jboss-cli.sh --connect --command="deploy --force /tmp/xyz/portal.war"',
        subscribe   => File['/tmp/xyz/portal.war'],
        refreshonly => true,
    }
}
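
To trigger the deployment immediately instead of waiting for the next scheduled agent run, execute the agent test command on the agent node.

# puppet agent --test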


References:

Installing Puppet: Red Hat Enterprise Linux (and Derivatives) — Documentation — Puppet Labs. 2015. [ONLINE] Available at: https://docs.puppetlabs.com/guides/install_puppet/install_el.html. [Accessed 05 October 2015].

Installing Puppet: Post-Install Tasks — Documentation — Puppet Labs. 2015. [ONLINE] Available at: https://docs.puppetlabs.com/guides/install_puppet/post_install.html. [Accessed 05 October 2015].

Language: Node Definitions — Documentation — Puppet Labs. 2015. [ONLINE] Available at: https://docs.puppetlabs.com/puppet/3.8/reference/lang_node_definitions.html. [Accessed 05 October 2015].

How to install Puppet server and client on CentOS and RHEL - Xmodulo. 2015. [ONLINE] Available at: http://xmodulo.com/install-puppet-server-client-centos-rhel.html. [Accessed 05 October 2015].

Saturday, October 24, 2015

Continuous Delivery

Continuous Delivery; a fairly new term I came across recently. I was working on a deployment automation task a few weeks ago, and here I summarize what I learnt about this practice and how it benefits developers.

Continuous deployment is deploying every change that passes automated tests to production; simply put, it is the practice of releasing every good build to users. It is all about putting the release schedule in the hands of the business rather than in the hands of the development team.

Introducing continuous delivery to a project means making sure the software is always production ready throughout its entire lifecycle, and being able to switch between release versions using a fully automated process in a matter of seconds or minutes.

According to Martin Fowler, continuous delivery is when,
  • Software is deployable throughout its lifecycle.
  • Team prioritizes keeping the software deployable over working on features.
  • Anybody can get fast, automated feedback on the production readiness of their systems any time somebody makes a change to them.
  • You can perform push-button deployments of any version of the software to any environment on demand.

Incorporating automation, frequent code releases, testing at every stage of the process, and a pull-based architecture that permits only successful releases to move to the next stage reduces errors and makes it easier to improve the software delivery process.

Automation makes successful processes repeatable. Whether we are introducing a new feature or making a change to a service's underlying system or infrastructure, automation lets us make the change quickly and safely, without the errors that would result from repeating the process manually.

Releasing code frequently, rather than shipping one or two big releases, means testing the product more often. There's less change in each release, so it's easier to isolate and fix problems. It's also easier to roll back when needed.

A pull-based architecture prevents code that fails automated tests from passing to the next stage of development. This stops errors from propagating and becoming harder to diagnose.

Software developers are rewarded for delivering quality software that addresses business needs, on schedule. Continuous delivery practices give software developers the ability to provision themselves with production-like environments and automated deployments so they can run automated tests. Instead of standing in their way, the operations team helps developers get their work done. Continuous delivery depends on continuous integration, which means every change is merged into and tested against the main code base, reducing the opportunity for long-standing feature branches and large merge windows that can lead to serious errors. Deployment becomes much less stressful when changes are small and tested at every step.



The video above is a talk given by Martin Fowler about Continuous Delivery.

Solving "org.jboss.as.cli.CliInitializationException: Failed to connect to the controller" Exception

I had a requirement to deploy an application to a JBoss Wildfly server via its command line tool, known as "jboss-cli". I executed the command below to deploy the war file, and it gave me a "CliInitializationException".
jboss-cli.bat --connect --command="deploy --force E:\Projects\CD&CI\portal.war"

The exception thrown from the jboss-cli tool:

org.jboss.as.cli.CliInitializationException: Failed to connect to the controller
        at org.jboss.as.cli.impl.CliLauncher.initCommandContext(CliLauncher.java:278)
        at org.jboss.as.cli.impl.CliLauncher.main(CliLauncher.java:253)
        at org.jboss.as.cli.CommandLineMain.main(CommandLineMain.java:34)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.jboss.modules.Module.run(Module.java:292)
        at org.jboss.modules.Main.main(Main.java:455)
Caused by: org.jboss.as.cli.CommandLineException: The controller is not available at localhost:9990
        at org.jboss.as.cli.impl.CommandContextImpl.tryConnection(CommandContextImpl.java:1020)
        at org.jboss.as.cli.impl.CommandContextImpl.connectController(CommandContextImpl.java:832)
        at org.jboss.as.cli.impl.CommandContextImpl.connectController(CommandContextImpl.java:811)
        at org.jboss.as.cli.impl.CliLauncher.initCommandContext(CliLauncher.java:276)
        ... 8 more
Caused by: java.io.IOException: java.net.ConnectException: JBAS012144: Could not connect to http-remoting://localhost:9990. The connection timed out
        at org.jboss.as.controller.client.impl.AbstractModelControllerClient.executeForResult(AbstractModelControllerClient.java:129)
        at org.jboss.as.controller.client.impl.AbstractModelControllerClient.execute(AbstractModelControllerClient.java:71)
        at org.jboss.as.cli.impl.CommandContextImpl.tryConnection(CommandContextImpl.java:997)
        ... 11 more
Caused by: java.net.ConnectException: JBAS012144: Could not connect to http-remoting://localhost:9990. The connection timed out
        at org.jboss.as.protocol.ProtocolConnectionUtils.connectSync(ProtocolConnectionUtils.java:120)
        at org.jboss.as.protocol.ProtocolConnectionManager$EstablishingConnection.connect(ProtocolConnectionManager.java:256)
        at org.jboss.as.protocol.ProtocolConnectionManager.connect(ProtocolConnectionManager.java:70)
        at org.jboss.as.protocol.mgmt.FutureManagementChannel$Establishing.getChannel(FutureManagementChannel.java:204)
        at org.jboss.as.cli.impl.CLIModelControllerClient.getOrCreateChannel(CLIModelControllerClient.java:169)
        at org.jboss.as.cli.impl.CLIModelControllerClient$2.getChannel(CLIModelControllerClient.java:129)
        at org.jboss.as.protocol.mgmt.ManagementChannelHandler.executeRequest(ManagementChannelHandler.java:117)
        at org.jboss.as.protocol.mgmt.ManagementChannelHandler.executeRequest(ManagementChannelHandler.java:92)
        at org.jboss.as.controller.client.impl.AbstractModelControllerClient.executeRequest(AbstractModelControllerClient.java:236)
        at org.jboss.as.controller.client.impl.AbstractModelControllerClient.execute(AbstractModelControllerClient.java:141)
        at org.jboss.as.controller.client.impl.AbstractModelControllerClient.executeForResult(AbstractModelControllerClient.java:127)
        ... 13 more

The problem is that I was using a native interface instead of an http interface, so the CLI wasn't using the correct protocol.

The solution is simple: I had to change the standalone-full.xml file as shown below to specify the correct interface.

<management-interfaces>
    <!--<http-interface security-realm="ManagementRealm" http-upgrade-enabled="true">
        <socket-binding http="management-http" />
    </http-interface>-->
    <http-interface security-realm="ManagementRealm" http-upgrade-enabled="true">
        <socket interface="management" port="${jboss.management.http.port:9990}" secure-port="${jboss.management.http.port:9993}"/>
    </http-interface>
</management-interfaces>

With this change, the jboss-cli tool will select the correct protocol when initializing its execution.
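
After restarting the server, the connection can be quickly verified from the same tool by reading the server state (the ":read-attribute(name=server-state)" operation is standard CLI syntax and should report "running"):

jboss-cli.bat --connect --command=":read-attribute(name=server-state)"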

Monday, September 14, 2015

Configuring Multiple Transaction Managers with @Transactional Annotation

Most of the time Spring applications work with a single transaction manager, but in some cases an application may have to work with multiple transaction managers.

Suppose an application has to access two databases, "ApplicationDB" and "ApplicationDataDB". Here the application should contain two separate data sources, and each data source should be accessible via its own independent transaction manager.



In this case, the optional value attribute of the @Transactional annotation can be used to specify the PlatformTransactionManager to use. The value can be either the bean name or the qualifier value of the transaction manager bean.

The qualifier value can be used as follows:

import org.springframework.transaction.annotation.Transactional;

public class ApplicationService {

    // runs under the transaction manager qualified as "txGeneral"
    @Transactional("txGeneral")
    public void setApplicationMetaData(String name) { ... }

    // runs under the transaction manager qualified as "txData"
    @Transactional("txData")
    public void setAppData() { ... }
}

Transaction manager beans can be defined as follows in the application context:

<tx:annotation-driven/>

  <bean id="transactionManager1" class="org.springframework.jdbc.DataSourceTransactionManager">
    ...
    <qualifier value="txGeneral"/>
  </bean>

  <bean id="transactionManager2" class="org.springframework.jdbc.DataSourceTransactionManager">
    ...
    <qualifier value="txData"/>
  </bean> 

Here both transactional methods will run under separate transaction managers, in separate sessions. The default <tx:annotation-driven> target bean name, transactionManager, will still be used if no specifically qualified PlatformTransactionManager bean is found.

If the application has to perform cross-database transactions, it is better to stick with JTA rather than having separate transaction managers, because a rollback propagated in a transaction running under one transaction manager will not roll back another transaction manager's transaction. Therefore cross-database transactions will not be atomic. More about this later....

Saturday, July 25, 2015

Write logs to different outputs according to log level

In some cases we have to write logs to different outputs according to the log level. This can easily be configured by specifying a filter on certain appenders. The easiest way of specifying appenders is with a properties configuration file, but properties files don't support log level filters. Therefore it is necessary to stick with XML to specify the appenders.

Log4j supports different log levels, which can be used for different kinds of events, as summarized in Table 1.

Table 1: Log4j log levels

#  Level  Summary
1  ALL    The lowest rank; intended to turn on all logging.
2  DEBUG  Useful for developers to identify events in an informational manner and to debug the application.
3  ERROR  Designates error events that must be investigated immediately but still allow the application to continue functioning.
4  FATAL  Used to log severe errors that can cause the application to abort its operations.
5  INFO   Useful for informative messages after an important process has finished.
6  OFF    The highest rank; intended to turn off all logging.
7  TRACE  Used to log events in a very detailed manner; TRACE logs are typically used during development and removed from production deployments.
8  WARN   Used to log potentially harmful situations.

Basically, when a low priority log level is set, messages with higher priority than the given level are also logged. In that case the log level range filter (LevelRangeFilter) can be used to reject messages with priorities outside a certain range. The range can be configured by specifying values for the LevelMin, LevelMax, and AcceptOnMatch properties.

A sample log4j XML file with log level filters, based on one created by a colleague, is shown below.
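
This is a minimal sketch of such a configuration for Log4j 1.2; the appender names, file paths and pattern layout are placeholder choices. DEBUG and INFO messages are routed to one file, while WARN and above go to another:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

    <!-- accepts DEBUG and INFO messages only -->
    <appender name="infoFileAppender" class="org.apache.log4j.FileAppender">
        <param name="File" value="logs/info.log"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d{ISO8601} [%t] %-5p %c - %m%n"/>
        </layout>
        <filter class="org.apache.log4j.varia.LevelRangeFilter">
            <param name="LevelMin" value="DEBUG"/>
            <param name="LevelMax" value="INFO"/>
            <param name="AcceptOnMatch" value="true"/>
        </filter>
    </appender>

    <!-- accepts WARN, ERROR and FATAL messages only -->
    <appender name="errorFileAppender" class="org.apache.log4j.FileAppender">
        <param name="File" value="logs/error.log"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d{ISO8601} [%t] %-5p %c - %m%n"/>
        </layout>
        <filter class="org.apache.log4j.varia.LevelRangeFilter">
            <param name="LevelMin" value="WARN"/>
            <param name="LevelMax" value="FATAL"/>
            <param name="AcceptOnMatch" value="true"/>
        </filter>
    </appender>

    <root>
        <level value="ALL"/>
        <appender-ref ref="infoFileAppender"/>
        <appender-ref ref="errorFileAppender"/>
    </root>
</log4j:configuration>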


References:
  • "Logging Services." Log4J. Apache Software Foundation, n.d. Web.
  • "Log4j Architecture." Log4J. Apache Software Foundation, n.d. Web.

Wednesday, May 6, 2015

JBoss EAP 6.1: javax.persistence.PersistenceException PersistenceProvider in org.apache.openjpa.persistence.PersistenceProviderImpl not found

This is an issue I had a few weeks back and solved by creating a module inside JBoss EAP. The issue came about because of a missing class that should be available inside EAP, which raised the above exception during deployment. I'm still unsure about the root cause: whether this requirement came with a Spring dependency I had or with the MS SQL Server driver I used. Somehow I managed to solve the issue by creating an OpenJPA module inside the JBOSS_HOME\modules\org\apache\openjpa directory.
First of all, we need to download OpenJPA from this link. Then extract the content to the above mentioned directory and create the module.xml file as follows.

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.1" name="org.jboss.as.jpa.openjpa">
    <properties>
        <property name="jboss.api" value="private"/>
    </properties>
    <resources>
        <resource-root path="jboss-as-jpa-openjpa-7.1.1.Final.jar"/>
    </resources>
    <dependencies>
        <module name="javax.annotation.api"/>
        <module name="javax.persistence.api"/>
        <module name="javax.transaction.api"/>
        <module name="org.jboss.as.jpa.spi"/>
        <module name="org.jboss.logging"/>
        <module name="org.jboss.jandex"/>
        <module name="org.apache.openjpa" optional="true"/>
    </dependencies>

</module>
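
For reference, the provider named in the exception is the one an application requests through its persistence.xml. A minimal sketch (the persistence unit name and data source below are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <persistence-unit name="examplePU">
        <!-- the provider class the deployment failed to resolve -->
        <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
        <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
    </persistence-unit>
</persistence>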

Tuesday, May 5, 2015

Java Messaging Service (JMS) - I

JMS is a collection of interfaces that define the specification messaging clients use when communicating with messaging systems. It is much like the way JDBC abstracts relational database access.
The JMS API supports three major types of messaging applications: general messaging applications, point-to-point applications and publish-subscribe applications. In order to build a general messaging application, there are a few components/interfaces the application's implementation should adhere to,
  • ConnectionFactory
  • Destination
  • Connection
  • Session
  • Message
  • MessageProducer
  • MessageConsumer

According to the JMS specification, both the ConnectionFactory and the Destination should be obtained through JNDI. The other objects can be created via JMS vendor implementations. The sequence is as follows: once we have a ConnectionFactory, we can create a Connection. With a Connection, we can create a Session. Once we have a Session, we can create a Message, a MessageProducer or a MessageConsumer.
Overview of a general JMS application.
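
As a minimal sketch of that sequence (the JNDI names "ConnectionFactory" and "queue/demoQueue" are placeholder assumptions, and a JMS provider plus a jndi.properties file are assumed to be on the classpath):

import javax.jms.*;
import javax.naming.InitialContext;

public class JmsDemo {
    public static void main(String[] args) throws Exception {
        // ConnectionFactory and Destination are obtained through JNDI, per the JMS spec
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Destination queue = (Destination) ctx.lookup("queue/demoQueue");

        Connection connection = factory.createConnection();
        try {
            // non-transacted session with automatic acknowledgement
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            MessageConsumer consumer = session.createConsumer(queue);

            connection.start(); // start message delivery before consuming
            producer.send(session.createTextMessage("Hello JMS"));
            TextMessage received = (TextMessage) consumer.receive(5000);
            System.out.println("Received: " + (received == null ? "nothing" : received.getText()));
        } finally {
            connection.close(); // also closes the session, producer and consumer
        }
    }
}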


Below I have attached a GitHub link to a demo application that demonstrates the above connectivity.

GitHub repo: https://github.com/rootpox/JBoss-A-MQ-MessagingQueue

Image reference: Mark Richards, 2009. Java Message Service. Second Edition Edition. O'Reilly Media.