Thursday, September 19, 2013

Excellent service for monitoring servers using Nagios - Monguru

I’ve been looking for ways to monitor servers for some time. Most services charge far too much to automate monitoring of multiple servers. One could do it nicely with a second server running Nagios, and a shared Nagios instance could be used by many people. The folks at Monguru have done just that. If you are into running web servers, you have surely heard of Nagios.
The documentation is scant and the installation scripts are scattered across different wiki pages, blog posts etc. Basically, to get this working, your server needs to run an SNMP agent that can be polled by Nagios, and you have to register at Monguru. They give you a login to their shared Nagios instance, and you can add your server to that instance for free. The dashboard is simplistic and config files are uploaded through a very simple interface. Anyway, to get your server monitored, download the script:
$ wget https://raw.github.com/monguru/configuration_scripts/master/add_new_server.sh
$ chmod +x add_new_server.sh
$ sudo ./add_new_server.sh
Then follow the steps shown, like naming your server. The script downloads all the SNMP config files that are needed. It downloads Python scripts that create a username and password that you need to remember. It also creates an instance on your Monguru site that you can use for Twitter notifications or git integration of your config files. Pretty simple, but cool stuff!!
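Conceptually, the SNMP side of this boils down to letting the shared Nagios host poll your agent read-only. A minimal /etc/snmp/snmpd.conf along these lines captures the idea (the community string and Nagios IP below are illustrative placeholders, not Monguru’s actual values):

```
# Allow read-only SNMP polling from the shared Nagios host only
rocommunity s3cr3tstring 203.0.113.10

# Optional metadata that shows up in monitoring UIs
syslocation "rack 4, local datacenter"
syscontact  admin@example.org
```

Restart snmpd after editing, and keep UDP port 161 reachable from the Nagios host only.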

Saturday, May 25, 2013

Windows Locked files in Jetty Maven Plugin

If you are developing Java webapps with Maven on Windows and using jetty:run to deploy, you have surely come across the problem where you cannot save HTML, JS or VM files. JSPs work fine because they are compiled and deployed. It’s surely irritating when that happens.

Jetty has documentation explaining the problem, and many web searches will point to that document. The problem lies with the NIO connector that Jetty uses. Others have suggested that switching to the old BIO connector is an easy way around it. The trouble with the Jetty documentation’s solution is that it requires a hardcoded location for the new webdefault.xml, and we don’t want to make changes to the source of our project just because we are on Windows.

My suggested solution is to edit the Maven Jetty plugin itself so it does not use the FileMappedBuffer. Edit the webdefault.xml found inside the plugin’s jar file and that’s all. Browse to your Maven repo location, e.g. for version 6.1.26 of the plugin go to USERHOME\.m2\repository\org\mortbay\jetty\jetty\6.1.26\jetty-6.1.26.jar. Open the jar file, find org/mortbay/jetty/webapp/webdefault.xml, and edit the file:

<init-param>
<param-name>useFileMappedBuffer</param-name>
<param-value>false</param-value> <!-- change from true to false -->
</init-param>

Save the file and put it back into the jar. Now whenever you run jetty:run, Jetty will use this file and Windows will no longer lock the files when Jetty deploys them.
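If you’d rather script the change than hand-edit inside an archive tool, the flip can be done with sed after extracting the entry from the jar. The sketch below builds a stand-in webdefault.xml just to show the edit; the sed expression changes the param-value on the line right after the useFileMappedBuffer param-name:

```shell
# Stand-in for the file extracted from jetty-6.1.26.jar
# (real entry path: org/mortbay/jetty/webapp/webdefault.xml)
cat > webdefault.xml <<'EOF'
<init-param>
<param-name>useFileMappedBuffer</param-name>
<param-value>true</param-value>
</init-param>
EOF

# Flip true -> false on the line following the param-name
sed -i '/useFileMappedBuffer/{n;s/true/false/;}' webdefault.xml

cat webdefault.xml
```

Against the real jar, the same steps would be: `unzip jetty-6.1.26.jar org/mortbay/jetty/webapp/webdefault.xml`, run the sed line on the extracted file, then `zip jetty-6.1.26.jar org/mortbay/jetty/webapp/webdefault.xml` to put the edited entry back.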

Thursday, March 21, 2013

Shout or Leave? - Open-source community governance

I’ve often thought of open-source contributions as being towards “social good”, but I also realize that is a fairly naïve way to look at the open-source world. I was listening to a friend’s frustration at getting people to work together. She is a social worker, now in a political party, trying to make people work together for “social good”. Since I participate closely in 3 fairly large open-source communities and follow a few others, she asked me how I saw this working in the world of open-source. That’s when I thought it might be good to post my thoughts.

Open-source in its literal definition is just putting your code out; it doesn’t mean anything more. Though we have associated a few implicit connotations with the concept. Particularly, that there is an open, bazaar-like mode of working, which can be thought of as similar to the concept of democracy. But as we can see from the political conditions in different parts of the world, democracy isn’t one single thing. Ideally it is a group of people working together towards a common goal, each person having an equal vote. But as the world is not ideal, the more pragmatic meritocracy is acceptable. The open-source world measures merit through a number of aspects like code contributions, advocacy, documentation etc., with the general focus being on getting work done. Yet most research and discussion around open-source misses the aspects of power, tradition and culture in communities that political scientists and sociologists have discussed for a long time. Open-source communities, like other human networks, have a vision of meritocracy and sometimes evangelize this vision, but often find it hard to practice.

Some open-source communities do have a BDFL, while others generally play by the resources rule. Resources include money, people and ideas, and the groups that possess these are generally considered more powerful. Some companies, because of their “cool” products, automatically make “cool” suggestions to the community, and their work is “cooler” than the average contributor’s work. But just because a developer works for a “cool” company does not necessarily mean that every developer from that company has better skills than your average contributor. Some open-source communities value context-of-use, while others value “de-contextualization”. Many researchers have highlighted that domain-specific open-source software communities are better served by being contextual. While the challenge of being contextual, and of translating contextual knowledge to a de-contextual developer, is well studied, it is really not well enacted in domain-specific open-source community governance. Governance relates to decisions that define expectations, grant power, or verify performance; it is either a separate process or part of decision-making or leadership processes. So the next time you read about OSS 2.0, realize that governance plays a vital role in the challenge of domain-specific open-source communities.

Open-source communities are typically expected to work around an open-source license, some code-of-conduct pages and developer roles. These alone, as we see from functioning democracies with their judiciary, legislature and executive, are fairly inadequate. Media is often considered the 4th pillar of democracy: a vehicle that allows voices to reflect on how the other three pillars are doing. Good governance often comes from reflective voices being heard, understood and acted upon.

Yet power plays an important role in the sustainability and growth of a community. As an independent contributor (just like a citizen in a democracy), one can either watch the power play, raise one’s voice so that others see it, or get fed up and leave.

Wednesday, March 6, 2013

You Ain’t Virtualized Till You’ve Used Archipel

I’ve set up a few virtualized environments, starting with the good old Xen in 2004. Good web-based, remote management of the VMs has been a sore point for me, since you needed some Gtk or Qt app to do all the VM management. Not that desktop virtual machine management isn’t robust; it’s just that when you are travelling and you just want to restart a VM quickly, a web interface does the job.

Another thing about VM management is being able to watch resource use in real time. There are people out there who love the command line, but I like a GUI for real-time resource management. Are there too many simultaneous users? High-latency requests? Is reporting occupying too much CPU? SSHing into a server just doesn’t cut it for me.

I recently discovered the Archipel project while trying to set up a virtualized environment for an NGO without a system admin, whose users don’t need to know qemu, libvirt etc. The goal is that in a few clicks you’d have a virtual machine ready to use. Another click to restart a VM. Another click to clone an existing VM. Increase or decrease VM memory or CPU cores by moving sliders. Isn’t that what Linode or Amazon EC2 offers, you ask? But I have my own server in a local datacenter, which turns out to be much more cost- and performance-effective in the long term than those providers.

Archipel does all of the above and much more. It is an excellent XMPP-based VM orchestration tool:

Archipel is a solution to manage and supervise virtual machines. No matter if you have a few locally on your computer or thousands through data centers, Archipel is a central solution to manage them all. You can perform all basic virtualization commands and many other things like live migration, VMCasts, packages, etc.

All you have to do is set up an ejabberd-based XMPP server and do some configuration, like pointing it at the qemu host, and it will find all the VMs on your list. You can even manage multiple hosts with multiple VMs from one ejabberd server. That’s not all: most of the commands work like chatting with a bot, which then runs commands against libvirt. How cool is that?!? Being able to chat with your hypervisor!!

On the client side, you install a set of web pages on Apache, either on the same host as ejabberd or a separate one. This client-side app uses WebSockets or BOSH and has a nice-looking UI, giving a real-time view of the virtual machines and the hosts. I also saw the built-in VNC client that uses only JavaScript, so you do not have to install any client on the local machine; it all runs in the web browser. There is some lag, but with a decent machine and a browser on a good internet connection, it works quite well.

There are some bugs that keep showing up in the client app, but all in all this is an excellent system. Virtual machine management cannot get easier than this. This is indeed the future of virtual machine orchestration.

Saturday, March 2, 2013

Alter Table for column with Foreign key in MySQL 5.6 Fails

Oracle released the much-awaited MySQL 5.6 GA on 5th Feb, 2013. Much to everyone’s surprise, and in some sense changing direction, lots of improvements were made available in the Community release of MySQL that were expected to be part of the Enterprise Edition only.

Eager to try out the new NoSQL and performance improvements in 5.6, I downloaded the new installer. It is a packaged installer that unpacks and installs connectors, Workbench and a few other things along with the MySQL 5.6 server. A surprising place where I got stuck was trying to install OpenMRS. Its Liquibase changeset uses the <modifyType> tag to change a varchar column size. This works well under MySQL 5.5, but fails in 5.6.

While I’ve searched the release notes, the what’s-new pages and a few other places, I haven’t found this change mentioned clearly for the MySQL 5.6 release. The point is that earlier you could disable the foreign key constraint checks, modify the columns that carry the constraints, and re-enable the foreign key checks. If you changed the columns on both ends consistently, things would just work. But in 5.6 this seems to have changed, and the only mention I’ve found is of new error messages that the server can throw. There has probably been some tightening of constraint management, but I couldn’t find much.
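For concreteness, here is the 5.5-era pattern in question, sketched with hypothetical table and column names (not the actual OpenMRS changeset; assume child.ref_code references parent.ref_code). Under 5.5 this runs fine; under 5.6 the ALTERs on foreign-key columns can fail with the new ER_FK_COLUMN_CANNOT_CHANGE error:

```shell
# Write the 5.5-style migration to a file (names are illustrative):
# disable FK checks, widen the column on both ends of the
# constraint, then re-enable the checks.
cat > widen_ref.sql <<'EOF'
SET FOREIGN_KEY_CHECKS = 0;
ALTER TABLE parent MODIFY ref_code VARCHAR(64) NOT NULL;
ALTER TABLE child  MODIFY ref_code VARCHAR(64) NOT NULL;
SET FOREIGN_KEY_CHECKS = 1;
EOF

# Apply it (against a 5.5 server this succeeds):
# mysql -u root -p mydb < widen_ref.sql
cat widen_ref.sql
```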

Here are the server error messages from MySQL 5.6 and MySQL 5.5:

http://dev.mysql.com/doc/refman/5.6/en/error-messages-server.html#error_er_fk_column_cannot_change
which wasn't there in:
http://dev.mysql.com/doc/refman/5.5/en/error-messages-server.html

Thursday, February 14, 2013

Opera to use WebKit engine

Update: Brendan Eich made an interesting post about fighting the monoculture and how the web needs diversity. But I feel Gecko needs to innovate faster to remain useful to the web. Servo is coming too late, platform acceleration is moving slowly, NPAPI/PPAPI is too slow, etc.

The news that Opera will be abandoning its Presto engine and moving to WebKit isn’t so much a shock as a disappointment for me. I have been a user of Opera for at least a decade now. Although I use Chrome and Firefox for many things, Opera has remained installed and upgraded because every new release has something innovative in it.

Presto is a nice, lightweight rendering engine: even with 50+ tabs open, the browser continues to work smoothly. Pages scroll fine and all the tabs open quickly when you restart the browser. I have a habit of keeping tabs open for pages I need to return to. Bookmarks just don’t cut it for me; an open tab is a reminder of what needs to be done. With Chrome, Firefox or Safari, living with many tabs is a pain. Crashes are common with those browsers and system memory usage grows almost exponentially. I don’t know how much of that can be attributed to the layout engine, but Opera does handle it with ease. All Opera users know this, and they probably feel disappointed that future versions of Opera might not be the same.

In some sense everyone acknowledges WebKit’s dominant position, more so as the world moves to mobile devices, where WebKit is the standard layout engine from the iPhone and Android to BlackBerry. What will differentiate Opera enough for its 300 million users to continue using it will be interesting to watch. I’m probably not upgrading Opera to the next release, but if I really wanted to use something I’ve grown up with, they say IE10 also grew up!!

Wednesday, January 30, 2013

CFP: Theory-driven Interventions in Health care using Health Information Systems

Calls for Papers (special): International Journal of User-Driven Healthcare (IJUDH)
Special Issue On: Theory-driven Interventions in Health care using Health Information Systems

Submission Due Date
2/1/2013 (Extended to 1st March, 2013)

Guest Editors
Saptarshi Purkayastha, Norwegian University of Science and Technology, Norway
Knut Staring, University of Oslo, Norway

Introduction
Theory-driven evaluation came to prominence only a few decades ago with the appearance of Chen’s 1990 book Theory-Driven Evaluations. Since that time, the approach has attracted many supporters as well as detractors. At its core, theory-driven evaluation has two vital components, one conceptual, one empirical. Conceptually, theory-driven evaluations should explicate a program theory or model. Empirically, theory-driven evaluations seek to investigate how programs cause intended or observed outcomes.
Yet limiting theory to evaluations is somewhat futile, because usually some theory, as the basis for a “hypothesis” (unless the research uses a grounded approach), is what drives interventions in the first place. For instance, some health information system (HIS) interventions aim to provide information about health system practices towards meeting the Millennium Development Goals (MDGs). A great number of theoretical lenses drive Information Systems (IS) interventions, and there have been attempts at collecting overviews of such theories, e.g. http://istheory.byu.edu. However, even though that list is quite comprehensive, it is not exhaustive – for example, it leaves out important perspectives from design science and information infrastructure theory.
In this special issue we seek to showcase papers that are driven by theory – in planning, in action, in diagnosis and in evaluations. “Theory-driven interventions” is used here to distinguish from report-style papers, position papers, or papers that draw concepts purely from observations without a theoretical basis prior to the intervention.

Objective
The special issue would like to highlight studies in HIS that focus on doing IS interventions with a theory in mind or with knowledge building/testing in mind. The studies in the special issue would like to explain the phenomenon of IS intervention through IS theory, yet allow medical researchers/practitioners to connect with them. These studies will help medical informaticians or public health practitioners to realize the importance of existing abstracted knowledge (theory) and consider appropriate theoretical lenses for HIS interventions.

Recommended Topics
Suggested topics for discussion include (but are not limited to) the following:
- Participatory action-research as a bottom up strategy to problem solving and achieving change in healthcare
- Distinguishing end-users from super-users and theorizing their views in HIS
- Institutionalization of IS within healthcare practices
- Design science perspectives on HIS
- Interventions that deal with structures in health systems and their evolution
- Efforts at scaling interventions and information infrastructure
- Quantity of knowledge absorption, quantity of knowledge transfer, innovation in HIS
- User satisfaction, performance, perception, behaviour, usage as in Cognitive dissonance theory
- Dynamics of social construction and performance of illness through user-driven healthcare practices
- Capabilities, absorptive capacity, environmental turbulence, agility as in Dynamic Capabilities Theory
- Resource Importance, Alternatives, Discretion as in Resource Dependency Theory
- Speech acts, Communicative action as in Language Action Perspectives when HIS systems capture patient narratives or clinician notes or communication in health systems
- Fit-Viability Model of IS interventions on Health systems
- Bridging the gap between what we know and what is knowable in clinical practice

Submission Procedure
Researchers and practitioners are invited to submit papers (over email to the guest editors) for this special theme issue on or before March 1, 2013. All submissions must be original and should not be under review by another publication. Interested authors should consult the journal’s guidelines for the manuscript submissions at: http://www.igi-global.com/Files/AuthorEditor/guidelinessubmission.pdf. Submitted papers should not be more than 8000 words inclusive of abstract, tables and references. All submitted papers will be reviewed by 2 reviewers on a double-blind basis. Papers must follow APA style for reference citations.

We also request interested authors to send an abstract as soon as possible for discussion.
All submissions and inquiries should be directed to the attention of:

Saptarshi Purkayastha
Norwegian University of Science & Technology, Norway
E-mail: saptarsp (at) idi<dot>ntnu.no

Knut Staring
University of Oslo, Norway
E-mail: knutst (at) ifi<dot>uio.no

Monday, January 28, 2013

Try Netbeans 7.3 RC1

The Netbeans 7.3 RC1 is out for everyone to try. After a lot of hard work from the Netbeans developers, and testing and feedback from the NetCAT community, the latest release of Netbeans is out… for the larger community to accept.

Download from here: http://bits.netbeans.org/netbeans/7.3/rc1/

As has been the tradition, the community will decide whether the release is good enough through the Community Acceptance Survey. You’ll need a netbeans.org account to complete the survey, but your feedback is invaluable.

Tuesday, October 2, 2012

Why VoLTE (Voice over LTE) might take really long

There was an interesting article today at the Reg explaining VoLTE (Voice over LTE – Long Term Evolution, or 4G LTE) in their WTF series of articles. I’ve been following the interesting phenomenon that over the last few months many telecom operators have been rolling out 4G networks without fully utilising the features that 4G networks bring for Voice over IP (VoIP).

To give a bit of background on LTE and the advantages I’m talking about: LTE, being a purely IP network, has the advantage of managing just one kind of data packet. This means we can build tools around managing only IP and data packets, so all your gateways and routers could be optimized using lessons learnt from internet firewalls, messaging routers and what have you. Voice is also sent as data packets instead of being carried on a separate circuit-switched channel. The GSMA lists some advantages of VoLTE:

  1. Single implementation promotes scale
  2. Single implementation reduces complexity
  3. Single implementation enables Roaming

But as you’ll see, it’s not so simple to move from an existing infrastructure to another one. We’ve seen this in the case of IPv6 as well. Information infrastructure theory discusses at length the challenges of evolving and migrating infrastructures. They don’t happen overnight, nor is there an obvious or expected path for how they’ll evolve! A couple of months back we heard from Verizon, despite its deployment of 4G LTE, that “there’s no rush for VoLTE”. If not using VoLTE, one can re-route voice calls to the old-style circuit-switched (CS) network. But this handover between the networks creates a lag of 3-4 seconds, and my guess is that under heavy traffic it could take longer. Investing in this handover might be another headache for the telecom operator, but that seems to be the path most operators are taking. The Reg article asks an important question: “Are phone users - most of the population these days, though rather fewer of them will be 4G early adopters - going to put up with the pre-call lag? Will they accept a lesser experience than they're used to?”. An important question, and the answer could be that operators will use 4G for data services and the 3G connection for voice. So they’ll not move to 4G completely and will still keep their 3G equipment.

There is a path dependency in what the operators will do, and it will result in much slower adoption of VoLTE. So while the operators may think it is fine to make users wait a few seconds to get on a call, I don’t think users will appreciate that… I hate listening to stupid beeps, or calls that never connect, on CDMA networks in India when making an urgent call!! We want 4G for the data speeds, but the route the operators are taking decreases usability. Why not get into VoIP apps through which calls can be made, and monetize that instead? Forget the fear of losing money from voice calls; increasing volumes on data services will help. As an operator, you can always bundle cheap calls into a VoIP app that you install as part of the SIM card.

Friday, September 28, 2012

Insourcing for Development – A Networks of Action Approach to GSD

When presenting “The Research Agenda for IT Impact Sourcing”, Heeks places ethical outsourcing and social outsourcing within the scope of what can broadly be referred to as the use of outsourcing for development. In the BoP (Base of the Pyramid) outsourcing continuum he differentiates these from exploitative outsourcing and commercial outsourcing. You can read about these terms in depth on his blog. The focus of social outsourcing is on contracting out goods and services to social enterprises. In their paper, Heeks & Arun (2010) highlight that social outsourcing has the potential to deliver development benefits to marginalized groups.

In the IT impact sourcing model, the idea is to create sustainable jobs in communities where opportunities are scarce, acting as income improvements. While this is a useful and common way to look at the developmental impacts of outsourcing, there is another way to use IT for development: through offshore insourcing.

Global Software Development (GSD) is a fairly common practice in large software projects. As an arbitrage in globalized markets, it is common practice to contract with a wholly owned subsidiary located in another country; this is offshore insourcing. While insourcing in itself might be offshore or in-country, and there has been a recent push at GM towards insourcing, offshore insourcing has many advantages that are seldom described in research. This is what I’d like to add to the research agenda for IT impact sourcing.

Titlestad, Staring and Braa (2009) highlight how the design of health information systems in the global south has been coordinated in a GSD fashion. The project’s core development of what is referred to as the global release happens at the University of Oslo, Norway, but the requirements for this come from different local teams based in the “global south”, as summarized in their paper:

[Figure: GSD coordination model, from Titlestad, Staring & Braa (2009)]

Since this is an open-source project (DHIS2), the idea of a wholly owned subsidiary might be unconventional to think about, but being part of the same global research network (HISP), each of the local nodes acts much like a node in a large global software corporation. Thus the local software requirements, design and use happen in different countries in the “global south”, but most of the global/generic software development happens in Norway. This type of offshore insourcing keeps the “generativity” (Gizaw, 2013) of the software intact, such that it remains “ready-for-customization” and “flexible”, without features specific to any one country implementation. This allows a new implementer or country to adopt DHIS2 without any software development costs; only customizations to the context need to be done. Even then, the generic features are available to new implementations at a much lower cost (following the principles of libre software) than if they were to develop the features from scratch.

How the GSD model has contributed to developmental impacts in many countries of the “global south” has been discussed in many research articles. Staring & Titlestad (2008) describe the global software development and commons-based peer production of DHIS2. Through practical examples from the project, they discuss software development practices aimed at improving the public health sector in the south. Many other researchers in the HISP network have, over the years, shown the developmental impacts resulting from the project and its use in developing countries. This action-research approach of the HISP network has been referred to as “Networks of Action”, where action research has been shown to have sustainable developmental impact in the “global south”. Combining these concepts of insourcing for the purpose of development using the Networks of Action approach is what should become part of the IT impact sourcing research agenda.

This blog post just introduces the idea that instead of focusing on outsourcing alone, “IT impact sourcing” can also cover insourcing and development through insourcing. A much more detailed analysis and discussion will be part of an upcoming research paper.