Saturday, February 6, 2021

Tomcat9 on Ubuntu 20.04 and getting catalina.out

Blogging after ages about something I couldn't find through multiple web searches, in the hope that it might be useful for others.

Install tomcat9:
sudo apt-get install tomcat9

Then create catalina.out by hand and let rsyslog write to it:

sudo touch /var/log/tomcat9/catalina.out

sudo chown syslog:adm /var/log/tomcat9/catalina.out

sudo usermod -a -G tomcat syslog

sudo chmod 770 /var/log/tomcat9

sudo systemctl restart rsyslog

sudo service tomcat9 restart

Now you'll be able to follow your logs just like in previous installations:

sudo tail -f /var/log/tomcat9/catalina.out
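If all you need is to tail the logs and you can live without a catalina.out file, the stock tomcat9 package already sends Catalina's output to the systemd journal, so the following should work as well (an alternative, not part of the setup above):

sudo journalctl -u tomcat9 -f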


Thursday, September 1, 2016

Running lh-toolkit 1.x on MySQL 5.7

MySQL 5.7's default sql_mode disallows zero dates and zero parts in dates (NO_ZERO_DATE, NO_ZERO_IN_DATE), whereas the application's timestamps default to 00:00:00. MySQL 5.7 also enforces ANSI-standard GROUP BY behaviour (ONLY_FULL_GROUP_BY), which previous versions of MySQL did not require.
The best way to fix this across sessions is to add the following under the [mysqld] section of my.ini or my.cnf, depending on your platform. On Ubuntu 16.04 this file is located at /etc/mysql/mysql.conf.d/mysqld.cnf:
sql_mode=""
In the setup wizard for the URL use:
jdbc:mysql://localhost:3306/@DBNAME@?autoReconnect=true&InnoDB=InnoDB
Basically, the connection string needs to contain the text InnoDB so that OpenMRS doesn't try to set the storage_engine variable, which has been replaced by default_storage_engine. Even that shouldn't really be required, because the choice of engine should be left up to the database implementer; it could be XtraDB (Percona's fork of InnoDB) or Aria (used in MariaDB).
Once these are done, you should be able to use MySQL 5.7 (and its excellent performance improvements) with lh-toolkit or OpenMRS 1.11.x and higher.
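To confirm the override took effect after restarting MySQL, a quick check from the shell works (adjust the credentials to your setup; both values should come back empty):

mysql -u root -p -e "SELECT @@GLOBAL.sql_mode, @@SESSION.sql_mode;"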

Monday, April 11, 2016

Representing OpenMRS at FOSS Asia 2016 in Singapore


I missed the OpenMRS worldwide summit 2015 in Singapore because of problems with my visa application. The Singapore ICA rejected my visa application twice and, as is their policy, gave no reasons why. So I was dejected, thinking I would never be able to make it to an OpenMRS summit, since the plan is to organize it in Singapore each year.

As the senior manager for education and training programs in OpenMRS, my goals for this year include releasing a certification program for developers, implementers and trainers. Singapore, as a tech hub, provides a perfect venue to build partnerships around education and training for the Asia-Pacific region. That is among the reasons why Asia's premier open-source conference, FOSS Asia, hosted its conference in Singapore in 2015 and 2016. I saw this conference as an avenue to forge partnerships, get the word out, and find individuals interested in our training and certification programs. I applied to be a speaker at FOSS Asia 2016 and my topic was selected for a 20-minute seminar. With support from the OpenMRS travel grant, I was able to travel to Singapore and, thankfully, with the invitation letter from the Singapore Science Centre, my visa application was not rejected this time!! So hopefully it means I can make it to the next OpenMRS worldwide summit too...

At FOSS Asia, I was accompanied by Michael Downey, community director at OpenMRS, and Mayank Sharma, the release manager for Platform 2.x at OpenMRS. Mayank has an interesting story: it was at FOSS Asia 2015 that he first got to know about OpenMRS, and he has been a rockstar contributor since. So we were hoping to find a few more like him at FOSS Asia 2016. At the conference, I spoke about the strength of OpenMRS's bazaar model of software development and the loosely governed community that has scaled over the last decade or so. I have personally been involved in the community for the last nine years, and I shared some of my experiences. A summary of my talk is part of Episode 4 of the OpenMRS Update podcast. You can also view the slides for my talk here.

 
We also started email conversations about partnering on our programs with Singapore's Nanyang Technological University (NTU), which is just starting to build a few health informatics research projects. We also met with General Assembly, a global training organization, but need to take the discussion forward with their Chicago office. Hopefully, this is the beginning of taking our program forward and finding partners around the world who can build capacity for health informatics using OpenMRS.

Thursday, July 10, 2014

OpenVZ node to be used as OpenVPN server

In the past I've owned VPS instances to use as proxies for setting up in-country applications. For instance, some countries' IT ministries don't like their servers being managed from another country. So the quick solution is to buy a cheap OpenVZ node from a provider advertising at lowendbox.com, or to ask a local IT guy to give me SSH access into his machine, and set up an OpenVPN server there. Someone else wanted to set up the same thing that I do, so I thought I'd write this blog entry.

Server Side

The instructions are for Debian, Ubuntu, Linux Mint and similar distros.

1. Install OpenVPN

# apt-get install openvpn
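Since this is an OpenVZ container, the provider has to enable the TUN/TAP device for it. A quick sanity check before going further (this is the usual test, though the exact error text can vary):

# cat /dev/net/tun

A reply of "File descriptor in bad state" means TUN is enabled; "No such file or directory" or "No such device" means you need to ask the provider to switch it on.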

2. Prepare key generation

# mkdir /etc/openvpn/easy-rsa
# cp /usr/share/doc/openvpn/examples/easy-rsa/2.0/* /etc/openvpn/easy-rsa

3. Editing vars

# cd /etc/openvpn/easy-rsa
# nano vars

Change the variables to whatever info you'd like in your certificates (KEY_SIZE controls the key length; 2048 is more than fine). In the vars file these are export lines:
export KEY_SIZE=2048
export KEY_COUNTRY="NO"
export KEY_PROVINCE="NO"
export KEY_CITY="Oslo"
export KEY_ORG="UiO"
export KEY_EMAIL="saptarsp@test.in"

# source ./vars

4. Generating the Certificate Authority (CA)

# ./clean-all
# ./build-ca

5. Generating the Server keys - (with server name as dhisServer)

# ./build-key-server dhisServer

6. Generate the Diffie Hellman Key Exchange parameters

# ./build-dh

7. Create a client key (with client name as sunny)

# ./build-key sunny

8. Generate the HMAC key for tls-auth (this adds an extra shared-secret signature to the TLS handshake packets, so the server drops unauthenticated connection attempts early)

# openvpn --genkey --secret /etc/openvpn/easy-rsa/keys/ta.key

9. Copy the generated keys into a keys folder

# mkdir -p /etc/openvpn/keys
# cp -pv /etc/openvpn/easy-rsa/keys/{ca.{crt,key},dhisServer.{crt,key},ta.key,dh2048.pem} /etc/openvpn/keys/
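A quick listing confirms everything the server config below expects is in place (the filenames follow the names used above):

# ls /etc/openvpn/keys/

You should see ca.crt, ca.key, dh2048.pem, dhisServer.crt, dhisServer.key and ta.key.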

10. Edit the OpenVPN server configuration. Remove everything and add the following (or adjust your existing config to match)

# nano /etc/openvpn/server.conf
port 1194
proto udp
dev tun

ca keys/ca.crt
cert keys/dhisServer.crt
key keys/dhisServer.key # This file should be kept secret
dh keys/dh2048.pem

server 10.8.0.0 255.255.255.0

ifconfig-pool-persist ipp.txt

push "redirect-gateway def1 bypass-dhcp" #all clients to redirect their default network gateway through the VPN
push "dhcp-option DNS 208.67.222.222" #OpenDNS servers
push "dhcp-option DNS 208.67.220.220"

keepalive 10 120

tls-auth keys/ta.key 0 # This file is secret

comp-lzo

user nobody
group nogroup

persist-key
persist-tun

status openvpn-status.log
log /var/log/openvpn.log
verb 3

11. Enable IP forwarding on the server

# echo 1 > /proc/sys/net/ipv4/ip_forward

12. Forward all network traffic through NAT masquerade (change eth0 to venet0 for OpenVZ or VPS nodes)

# iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
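Both of these settings are lost on reboot. A minimal sketch to make them persistent (file and package names are the usual Debian/Ubuntu ones and may differ on your distro):

# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
# sysctl -p
# apt-get install iptables-persistent

The iptables-persistent package offers to save the current rules, including the MASQUERADE rule above, and restores them at boot.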

13. Restart OpenVPN service

# service openvpn restart
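To confirm the VPN came up, a couple of quick checks (the log path matches the config above):

# ip addr show tun0
# tail -n 20 /var/log/openvpn.log

You should see a tun0 interface holding 10.8.0.1 and an "Initialization Sequence Completed" line near the end of the log.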


Client Side


On the client side, you don't have to do much. If you want your entire office to access the internet through this VPN, then you should install DD-WRT or another router firmware with a built-in OpenVPN client (such as Padavan's firmware for the Asus N56U). Below is a screenshot of Padavan's firmware OpenVPN client. Note the important extended configuration option redirect-private def1 (all outgoing IP traffic will be redirected through the VPN).


[Screenshot: OpenVPN client settings in Padavan's firmware]


If you are using Windows and want to connect, try OpenVPN-GUI, a pretty simple but useful client for connecting to OpenVPN servers. Remember to download the build with the TAP driver, so you can seamlessly get all traffic to flow through the VPN connection. After the installation is done, copy the ca.crt, sunny.crt, sunny.key and ta.key files that were generated on the server into C:\Program Files\OpenVPN\config. You can email them or use WinSCP to transfer the files to the client machine. Then create a sunny.ovpn file in the same folder with the following content:

# C:\Program Files\OpenVPN\config\sunny.ovpn
client
remote xxx.xxx.xxx.xxx # replace this with your server IP
port 1194
proto udp
dev tun
dev-type tun
ns-cert-type server
reneg-sec 86400
tls-auth ta.key 1
auth-retry interact
comp-lzo yes
verb 3
ca ca.crt
cert sunny.crt
key sunny.key
management 127.0.0.1 1194
management-hold
management-query-passwords

That should be all that is required. Once you start OpenVPN GUI, you will see a system tray icon; right-clicking it shows Connect, or, if you have multiple .ovpn files, a choice of which one to connect to.
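Once connected, an easy way to confirm that traffic really leaves through the VPN is to compare your public IP before and after connecting, from any machine on the client side that has curl (any what-is-my-IP service works just as well):

$ curl ifconfig.me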

Thursday, September 19, 2013

Excellent service for monitoring servers using Nagios - Monguru

I've been looking for ways to monitor servers for some time. Most services charge way too much to automate monitoring of multiple servers. One could do it nicely with a second server running Nagios, and a shared Nagios could be used by many people. The guys at Monguru have done just that. If you are into running web servers, you have definitely heard of Nagios.
The documentation is scant and the installation scripts are spread across different wiki pages, blog posts, etc. Basically, to get this working your server needs to run an SNMP agent that can be polled by Nagios, and you have to register at Monguru. They provide you a login to their shared Nagios instance and you can add your server to that instance for free. The dashboard is simple and config files are uploaded through a very simple interface. Anyway, to get your server monitored, download the script:
$ wget https://raw.github.com/monguru/configuration_scripts/master/add_new_server.sh
$ chmod +x add_new_server.sh
$ sudo ./add_new_server.sh
Then follow the steps mentioned, like naming your server. The script downloads all the SNMP config files that are needed. It also downloads Python scripts that create a username and password you need to remember, and it creates an instance on the Monguru website that you can use for Twitter notifications or git integration of your config files. It is pretty simple, but cool stuff!!
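If you want to confirm the SNMP agent is answering locally before Nagios starts polling it, something like this works (the community string and listening address depend on the config the script installed; "public" on localhost is just the common default, and the OID below is sysDescr):

$ snmpget -v 2c -c public localhost 1.3.6.1.2.1.1.1.0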

Saturday, May 25, 2013

Windows Locked files in Jetty Maven Plugin

If you are developing Java webapps using Maven on Windows and deploying with jetty:run, you have surely come across the problem where you cannot save HTML, JS or VM files while Jetty is running. JSPs work fine because they are compiled and deployed. It's surely irritating when that happens.

Jetty has documentation explaining the problem, and most web searches will point to that document. The issue is basically with the NIO connector that Jetty uses: static files are served through memory-mapped buffers, and Windows locks memory-mapped files. Others have suggested that switching to the old BIO connector is an easy way to solve this. The trouble with the solution in the Jetty documentation is that it requires a hardcoded location for the new webdefault.xml, and we don't want to make changes to the source of our project just because we are on Windows.

My suggested solution is to simply edit the jar the Maven Jetty plugin uses so that it does not use the file-mapped buffer. Edit the webdefault.xml found inside the jar file and that's all. Browse to your Maven repo location, e.g. for version 6.1.26 of the plugin go to USERHOME\.m2\repository\org\mortbay\jetty\jetty\6.1.26\jetty-6.1.26.jar. Open the jar file, find org/mortbay/jetty/webapp/webdefault.xml, and edit it:

<init-param>
  <param-name>useFileMappedBuffer</param-name>
  <param-value>false</param-value> <!-- change from true to false -->
</init-param>

Save the file and put it back into the jar. Now whenever you run jetty:run, Jetty will use this file and Windows will no longer lock the files when Jetty deploys them.
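If you prefer doing the extract-and-repack from the command line instead of through an archive tool, the JDK's jar utility can do it (a sketch; the path assumes the default local repository location mentioned above):

cd %USERPROFILE%\.m2\repository\org\mortbay\jetty\jetty\6.1.26
jar xf jetty-6.1.26.jar org/mortbay/jetty/webapp/webdefault.xml

Edit the extracted org\mortbay\jetty\webapp\webdefault.xml as shown above, then write it back into the jar:

jar uf jetty-6.1.26.jar org/mortbay/jetty/webapp/webdefault.xml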

Thursday, March 21, 2013

Shout or Leave? - Open-source community governance

I've often thought of open-source contributions as being towards "social good", but I also realize that is a fairly naïve way to look at the open-source world. I was listening to a friend's frustration with getting people to work together. She is a social worker, now in a political party, trying to make people work together to do "social good". Since I participate closely in three fairly large open-source communities and follow a few others, she asked me how I see this working in the world of open-source. That's where I thought it might be good to post my thoughts.

Open-source in its literal definition is just putting your code out; it doesn't mean anything more. Though we have associated a few implicit connotations with the concept. Particularly, that there is an open, bazaar-like mode of working, which can be thought of as similar to the concept of democracy. But as we can see from the political conditions in different parts of the world, democracy isn't one single thing. It is indeed a group of people working together towards a common goal, ideally with each person having an equal vote. But as the world is not idealistic, the more pragmatic meritocracy is acceptable. The open-source world looks at meritocracy through a number of aspects like code contributions, advocacy, documentation, etc., with the general focus being on getting work done. Yet most research and discussion around open-source misses the aspects of power, tradition and culture in these communities that political scientists and sociologists have talked about for a long time. Open-source communities, like other human networks, have a vision of meritocracy and sometimes evangelize this vision, but often find it hard to practice.

Some open-source communities have a BDFL, while others generally play by the resources rule. Resources include money, people and ideas, and the groups that possess them are generally considered more powerful. Some companies, because of their "cool" products, automatically make "cool" suggestions to the community, and their work is "cooler" than the average contributor's work. But just because a developer works for a "cool" company does not necessarily mean that every developer from that company has better skills than your average contributor. Some open-source communities value context-of-use, while others value "de-contextualization". Many researchers have highlighted that domain-specific open-source software communities are better served by being contextual. While this challenge of being contextual, and of translating contextual knowledge to a de-contextualized developer, is well studied, it is really not well enacted in domain-specific open-source community governance. Governance relates to decisions that define expectations, grant power, or verify performance. It consists of either a separate process or a part of decision-making or leadership processes. Thus, the next time you read about OSS 2.0, realize that governance plays a vital role in the challenge of domain-specific open-source communities.

Open-source communities are typically expected to work around an open-source license, some code-of-conduct pages and defined developer roles. As we see from functioning democracies, those three alone (the equivalents of judiciary, legislature and executive) are fairly inadequate. Media is often considered the fourth pillar of democracy: a vehicle that allows voices to reflect on how the other three pillars are doing. Good governance often comes from the fact that reflective voices are heard, understood and acted upon.

Yet power plays an important role in the sustainability and growth of a community. As an independent contributor (just like a citizen in a democracy), one can either watch the power play, raise one's voice so that others see it, or get fed up and leave.

Wednesday, March 6, 2013

You Aint Virtualized Till You’ve Used Archipel

I've set up a few virtualized environments, starting with the good old Xen in 2004. Good web-based, remote management of the VMs has been a sore point for me, since you needed some Gtk or Qt app to do all the VM management. Not that desktop virtual machine management isn't robust, it's just that when you are travelling and you want to restart a VM quickly, a web interface does the job faster.

Another thing about VM management is being able to look at resource use in real time. There are people out there who love the command line, but I like a GUI for real-time resource monitoring. Are there too many simultaneous users, high-latency requests, reporting occupying too much CPU? SSHing into a server through the command line just doesn't cut it for me.

I recently discovered the Archipel project when trying to set up a virtualized environment for an NGO without a system admin, who shouldn't need to know qemu, libvirt, etc. The goal is that in a few clicks you'd have a virtual machine ready to be used. Another click to restart a VM. Another click to clone an existing VM. Increase or decrease VM memory or CPU cores by moving some sliders. Isn't that what Linode or Amazon EC2 offers, you ask? But I have my own server in a local datacenter, which turns out to be much more cost-effective and better-performing than those providers in the long term.

Archipel does all of the above and much more. It is an excellent XMPP-based VM orchestration tool:

Archipel is a solution to manage and supervise virtual machines. No matter if you have a few locally on your computer or thousands through data centers, Archipel is a central solution to manage them all. You can perform all basic virtualization commands and many other things like live migration, VMCasts, packages, etc.

All you have to do is set up an eJabberd-based XMPP server and add some configuration, like the qemu host, and it will find all the VMs from your list. You can even manage multiple hosts with multiple VMs from one eJabberd server. That's not all. Most of the commands are like chatting with a bot, which then runs commands through libvirt. How cool is that?!? Being able to chat with your hypervisor!!

On the client side, you install a set of web pages on Apache, either on the same host as eJabberd or on a separate one. This client-side app uses WebSockets or BOSH and has a nice-looking UI, giving a real-time view of the virtual machines and the hosts. I also used the built-in VNC client, which is pure JavaScript, so you do not have to install any client on the local machine; it all runs from the web browser. There is some lag, but with a good machine, a decent browser and a good internet connection, it works quite well.

There are some bugs in the client app that keep showing up, but all in all this is an excellent system. Virtual machine management cannot be easier than this… This is indeed the future of virtual machine orchestration.

Saturday, March 2, 2013

Alter Table for column with Foreign key in MySQL 5.6 Fails

Oracle released the much-awaited MySQL 5.6 GA on 5th February 2013. Much to everyone's surprise, and changing direction in some sense, lots of improvements were made available in the Community release of MySQL that were expected to be part of the Enterprise Edition only.

Eager to try out the new NoSQL and performance improvements in 5.6, I downloaded the new installer. It is a packaged installer that unpacks and installs the connectors, Workbench and a few other things along with the MySQL 5.6 server. A surprising place where I got stuck was trying to install OpenMRS. The Liquibase changeset uses the <modifyType> tag and attempts to change the size of a varchar column. This works well under MySQL 5.5, but fails in 5.6.

While I've searched the release notes, the what's-new pages and a few other places, I haven't found this change mentioned clearly for the MySQL 5.6 release. The issue is that earlier you could disable the foreign key constraint checks, modify the columns that carry the constraints, and re-enable the foreign key checks. As long as you changed the columns on both ends, things would just work. But in 5.6 this behaviour seems to have changed, and the only mention I've found is the new error messages that the server can throw. There has probably been some tightening around constraint management, but I couldn't find much.
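To make that concrete, here is the kind of sequence that used to work on 5.5 but now trips the new error on 5.6 (a hypothetical sketch; the database, table and column names are made up and are not from OpenMRS's changesets):

$ mysql -u root -p mydb <<'SQL'
SET FOREIGN_KEY_CHECKS = 0;
ALTER TABLE orders MODIFY customer_id VARCHAR(38);  -- column referenced by a foreign key
ALTER TABLE customers MODIFY id VARCHAR(38);        -- the referenced column
SET FOREIGN_KEY_CHECKS = 1;
SQL

On 5.5 both ALTERs go through; on 5.6 the first one can now fail with ER_FK_COLUMN_CANNOT_CHANGE.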

Here are the server error messages from MySQL 5.6 and MySQL 5.5:

http://dev.mysql.com/doc/refman/5.6/en/error-messages-server.html#error_er_fk_column_cannot_change
which wasn't there in:
http://dev.mysql.com/doc/refman/5.5/en/error-messages-server.html

Thursday, February 14, 2013

Opera to use Webkit engine

Update: Brendan Eich made an interesting post about fighting the monoculture and how the web needs diversity. But I feel Gecko needs to innovate faster to remain useful to the web. Servo is coming too late, platform acceleration is moving slowly, NPAPI/PPAPI is too slow, etc.

The news that Opera will be abandoning its Presto engine and moving to Webkit isn't so much a shock as a disappointment for me. I have been a user of Opera for at least a decade now. Although I use Chrome and Firefox for many things, Opera has remained installed and upgraded because every new release has something innovative in it.

Presto is a nice, lightweight rendering engine; even with 50+ tabs open, the browser continues to work smoothly. Pages scroll fine and all the tabs open up quickly when you restart the browser. I have a habit of keeping tabs open for pages that I need to come back to. Bookmarks just don't cut it for me; an open tab is a reminder of what needs to be done. With Chrome, Firefox or Safari, keeping many tabs open is a pain: crashes are common and system memory usage grows almost exponentially. I don't know how much of that can be attributed to the layout engine, but Opera handles it with ease. All Opera users know this, and they probably feel disappointed that future versions of Opera might not be the same.

In some sense everyone concedes Webkit's dominant position, more so as the world moves to mobile devices, where Webkit is the standard layout engine from the iPhone and Android to BlackBerry. What will keep Opera's 300 million users on the browser will be interesting to watch. I'm probably not upgrading Opera to the next release, but then, if I really wanted to use something I've grown up with, they say IE10 also grew up!!