Friday, June 27, 2008

AMD Brings Back All-In-Wonder

AMD has released a press announcement saying that it "Expands The Ultimate Visual Experience™" by combining exceptional HD graphics, HDTV, and HD video for PCs. What this actually means is that AMD is bringing back the good ol' TV tuner + graphics card that ATI used to sell under the All-In-Wonder line of products. The new ATI All-In-Wonder HD is a PCI Express 2.0 card with the HD 3650 GPU and the Theater 650 Pro TV tuner.

Since AMD had stopped selling these products, with the last launch in January 2006, this does come as a surprise to a lot of people. The big selling point AMD is pushing with this launch is that the card is full HD and can record in full HD. It can play back Blu-ray and HD content at 1080p.

From the press release:

ATI All-in-Wonder HD prepares you for brilliant TV, sharp images and smooth playback on a wide variety of HDTVs and displays. With support for Microsoft DirectX® 10.1, gamers can play the top HD games with life-like 3D graphics, stunning realism, and great shading effects. Full support for PCI Express 2.0 technology allows for twice the throughput of current PCI Express 1.0 cards, which means gamers will be ready for demanding graphic applications.

The product is priced at around $199, which sounds like a really good deal and value for money. The full spec sheet can be found here. The plan is to hit the streets by the end of July.

Thursday, June 26, 2008

Barcode Fun With OpenMRS

The Registration Module is supposed to generate barcode images that will be printed on stickers and given to patients on their ID cards. This will make patient identification easier and patient registration quicker. For this purpose, we use the "Patient Identifier" to create barcodes. The "Patient Identifier" is hopefully unique and will help create bug-free barcodes.

Barcodes come in a lot of different standards like Code39, Code128, UPC-A, UPC-E etc. Each of these standards was designed for a specific industry and application, and the Registration Module should support different standards so that implementers of OpenMRS can easily choose whichever standard they want. Thus began my journey to find a way to generate customizable barcodes easily.

Having just finished the commit for barcode generation in my Registration Module, I am extremely happy to say that generating barcodes through the module is very easy. I still have to implement a print dialog box, but the barcode generation itself is done through a pre-built servlet... I used an open-source barcode library called Barcode4j, which can generate barcodes in a variety of standards as well as image formats.

Barcode4j is an excellent library; after comparing about 15 different barcode libraries, I found it to be the easiest to use and the most extensible. Barcode4j already provides a Java servlet which only needs to be passed a few parameters and generates the barcode image "just like that". The only thing I had to do was create a mapping for the servlet in the module's config.xml, as sketched below.
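For the curious, the mapping boils down to a few lines of XML. This is only a sketch: the servlet name and module id here are my own illustrative picks, and the request parameters (msg, type, fmt) should be double-checked against the Barcode4j docs:

config.xml:

<servlet>
    <!-- Barcode4j ships this ready-made servlet; the module just exposes it -->
    <servlet-name>barcodeServlet</servlet-name>
    <servlet-class>org.krysalis.barcode4j.servlet.BarcodeServlet</servlet-class>
</servlet>

OpenMRS should then serve it under the module servlet path, so a request like

/openmrs/moduleServlet/registration/barcodeServlet?msg=1234-5&type=code128&fmt=image/x-png

would return a Code128 image of the (hypothetical) patient identifier 1234-5.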

And that was it... those nice black lines showed up on screen beside the patient search results. Next I needed to see whether the barcodes were accurate and working, so I took a printout of the image and scanned it with the barcode scanner. Hurray!! It worked!! The servlet was creating accurate barcodes. Now I'm moving on to creating a nice AJAX print dialog and probably some useful UI for creating identity cards.

Monday, June 23, 2008

Niagara 3 Does 256 Threads, But No Supercomputing?

The Register claims to have learnt that the next version of the Niagara processor from Sun Microsystems is going to be a 16-core processor with 16 threads per core. This means that Niagara 3 will be running 256 threads simultaneously. And we aren't talking about some distant future plan: Sun Microsystems will be releasing these processors in 2009.

If you did not know, Sun Microsystems has been selling the Niagara 2 processors, which can run 64 threads simultaneously on an 8-core processor. Niagara 2 is officially sold as the UltraSPARC T2+ chip and has been one of the most successful processors in Sun's lineup in recent years. So many threads running simultaneously on a single chip means screaming performance and excellent value for money. There are 2-socket as well as 4-socket systems available with the UltraSPARC T2+. And then there are the Rock processors coming in 2009.

From what I heard 2 years back, UltraSPARC had the largest processor design team in the world. So it comes as no surprise that Sun has the best design and performance in multithreaded systems. They also open-sourced their older SPARC designs as part of OpenSPARC.net and have pledged to continue open-sourcing the next-gen processors as well!!

But the sad thing is that the UltraSPARC Tx processors haven't been as big a hit in the HPC and supercomputer market. If you look at the list of top supercomputers for June 2008, you will see that Sun Microsystems figures just once in the top 10: the system at the Texas Advanced Computing Center at the University of Texas, and that one uses AMD's Opteron processors, not UltraSPARC. Across all 500 supercomputers, IBM rules the roost with 209 systems and HP comes second with 183. IBM has been able to push POWER processors to build some awesome supercomputers, whereas HP has taken the Xeon and built powerful systems around it. Sun Microsystems, on the other hand, has just 4 supercomputer systems on the list, and all of them use Opterons.

Most people would argue that single-core performance proves more effective than multi-threaded performance in conventional supercomputer software and benchmarks. Processes are more common units of work than threads themselves, but that's where overall system building plays its part. As Seymour Cray said, "Anyone can build a fast CPU. The trick is to build a fast system."

Review: openSUSE 11.0

On the 19th of June 2008, openSUSE 11.0 was released and I was very excited about the new release, because my experience with openSUSE 10.3 had been very good and I had been following the development of openSUSE 11.0 closely. In the meantime, I have tried Ubuntu 8.04, Kubuntu, Fedora 9 and OpenSolaris 2008.05, but somehow I've kept coming back to openSUSE 10.3 because of one nagging problem or another with the other distros...

Download

I downloaded openSUSE 11.0 the moment it was released and I have to say that the release was very professionally coordinated. There were launch events all around the globe where people received their openSUSE 11.0 DVDs, and with the countdown counter running all the time, everyone knew when to get their download managers ready. The mirrors were fast and the torrents seemed to have enough seeders. I finished the 4.3 GB DVD ISO by the morning of the 20th (IST), in just about 4 hours. There are also single Live CD KDE and GNOME ISOs, as well as a MiniCD (71 MB) for network installation...

Installation: Sleek and Image-Based

The openSUSE site has a nice installation guide with screenshots, so it doesn't make sense for me to go through the same thing again. But two things in openSUSE 11.0 are worth mentioning. The first is the gorgeous installation GUI, the best-looking installer of any operating system ever!! It's easy and intuitive. The second is the use of image deployment for the installation of GNOME. This really speeds up the installation if you are just doing a basic GNOME-based setup. I generally prefer KDE, but for the test I installed GNOME and it was fast... really, really fast! I was looking at the GNOME desktop with all the preferred software installed in 15 minutes flat. That's faster than any other distro I've ever installed. It was an amazing experience to see such a fast installation!

[Screenshot: Image-based deployment during installation]

Like previous versions, openSUSE 11.0 comes with a variety of useful non-open-source software: Flash Player, Java 1.6.0 update 6, fonts, Adobe Reader 8, etc. Along with these I also installed JDK 6 update 10 (with the awesome new Java plugin), Mono, NetBeans 6.1 and GlassFish for my OpenMRS performance test... KDE 4.0 is also there as a separate choice of GUI at install time, along with KDE 3.5.9 and GNOME 2.22. Since I have never been able to run KDE 4.0 stably and have always switched back to KDE 3.5, I thought I'd try KDE 4.0 in openSUSE 11.0.

[Screenshot: KDE 4.0 in openSUSE 11.0]

I was pleasantly surprised that KDE 4.0 "just worked". I had my first KDE 4.0 crash after 1.5 hours of use, whereas earlier a SIGSEGV or segmentation fault would show up within 20 minutes. I still don't want any crashes at all, and hence I'm back to using KDE 3.5.9. But KDE 4 is really coming along well!

As soon as I finished installing, everyone at home wanted me to record the Euro 2008 matches, so I needed VLC installed. I went to videolan.org/vlc and clicked on the SuSE link... and I was greeted with a 1-Click Install button.

[Screenshot: VLC 1-Click Install]

This is one of the really awesome openSUSE features that was first introduced in openSUSE 10.3 and has been improved in openSUSE 11.0. I clicked on it and the installation finished really quickly.

Improved Installation with YaST

That's when I noticed the most important update in openSUSE 11.0: the improved speed of YaST. No other distro has such an easy administration tool where nearly everything can be administered. And in openSUSE 11.0, everything in the YaST modules just works. RPM installation is fast and adding community repositories is easy. I am a big fan of apt-get in Ubuntu, but openSUSE 11.0 software installation is just as easy now...

[Screenshots: Adding community repositories in YaST]

Every piece of hardware worked

I have lots of hardware, old and new, on which I often install and test different distros, Windows, OSx86 etc. openSUSE worked out of the box with every bit of hardware that was thrown at it. Every distro has struggled with the UMTS 3G card on one laptop, but surprisingly openSUSE 11.0 made it work. A few other distros had trouble with the legacy Nvidia Quadro GoGL card on another laptop, but openSUSE 11.0 worked... Old printers, USB devices, FireWire: everything worked. Even the barcode reader with a PS2-USB converter worked on the USB port, something which wouldn't work on Ubuntu 8.04 or other newer distros.

The only hiccup was that surround sound wasn't working on my desktop's Intel DG965RY board. I followed the Audio Troubleshooting doc, added the model=dell-3stack option, and all my speakers started trumpeting!
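For reference, the fix boils down to a one-line ALSA module option. A sketch of what I added; the exact file under /etc/modprobe.d/ varies by distro version, and model=dell-3stack is specific to this board's HDA codec:

# tell the Intel HDA driver which speaker/jack layout to assume
options snd-hda-intel model=dell-3stack

After reloading the driver (or rebooting), each channel can be checked with speaker-test -c 6.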

Compiz-Fusion and the Bling

Last time around, I was not happy with the stability of Compiz-Fusion on openSUSE 10.3. Compiz-Fusion worked well on Ubuntu 8.04, so I knew it was something to do with the kernel module driver on my openSUSE system. With openSUSE 11.0, Compiz-Fusion works perfectly and shows off all its features. A nice little configuration screen helps manage the amount of effects you wish to enable. I personally don't enable effects, but it's a good show-off to make people stand up and appreciate open-source beauty.

[Screenshot: Compiz-Fusion]  [Screenshot: Compiz-Fusion Sphere]

Other features and improvements

  • Linux kernel 2.6.25
  • glibc 2.8
  • GCC 4.3
  • 200 other new features

Conclusion

You can't miss the ease of use and the sleek looks that openSUSE 11.0 brings to the desktop. It's the perfect distro for a new user coming to Linux. For the old pros, openSUSE 11.0 is fast and brings ease of administration and software installation. Novell support is pretty good for big organizations that can buy a boxed product from them. Xen is my favorite for virtualization, and it has good integration and management in YaST. But the strength and momentum of openSUSE is definitely in the desktop space. Earlier, openSUSE lacked the community backing that Ubuntu generated in a short timespan, but with new initiatives and better responses on the openSUSE forums, the openSUSE community has grown by leaps and bounds. openSUSE 11.0 has gone from strength to strength and is one of the best ways to give Windows some competition on the desktop!

Other screenshots

[Screenshot: The GNOME Desktop]  [Screenshot: The KDE Desktop]
[Screenshot: Compiz-Fusion Cube Atlantis Plugin]  [Screenshot: Compiz-Fusion Burn Animation]

Tuesday, June 17, 2008

Windows Beats Linux/OSX at Handwriting

Lately I've been trying to code an application which requires some form of natural handwriting recognition. Natural handwriting is basically the way you write on your Tablet PC/Pocket PC/Palm/mobile, either using a stylus on a touch-screen or using the mouse on a desktop. And after looking at the different options available for handwriting recognition, Windows beats every other operating system, including Linux and Mac OS X, hands down!!

For the last month or so I've tried every possible handwriting recognition software and API out there: paid, free and open-source APIs and programs for Windows, Linux and OS X. Out of all the ones I tried, only one worked properly and was easy to develop upon, and it's from Microsoft. Microsoft has an ultra-amazing API called InkAnalysis which, obviously and sadly for me, works only on Windows. Sad because the application for which I'm developing this handwriting recognition module is a popular open-source kids' software, mainly targeted at Linux distros.

InkAnalysis is a powerful API that performs two complementary activities: handwriting recognition and layout classification. The InkAnalysis API has very good interfaces for detecting the layout of any given document, and based on its understanding of the layout it performs handwriting recognition. Even without training, it amazingly detects a wide variety of handwriting, including samples that even I had a hard time reading as a professor when checking my students' answer sheets. The API was earlier only available as part of the Windows Vista SDK, but Microsoft has also released it through the Tablet PC Platform SDK. This means applications that use the InkAnalysis API can run not only on Windows Vista but also on Windows XP SP2, SP3 and beyond... And the best showcase of the accuracy and strength of the API is the built-in Windows Vista Tablet PC Input Panel (see the screenshot below).

[Screenshot: Tablet PC Input Panel]

During the trials of all the different APIs and programs, I had high expectations of finding something from Apple that "just works". With the iPhone, they seemed to me like the "touch" geniuses, and handwriting would definitely be in the touch category. But alas! OS X's InkWell isn't good enough. It makes too many mistakes and correcting them doesn't work very well. Training or no training, it had a hard time recognizing natural cursive handwriting. Even the API was pretty complex to use and didn't have enough documentation.

[Screenshot: InkWell on OS X]

Finally I came back to where I started... looking for an API that would work on Linux, preferably open-source licensed. I tried a few and none were close to usable. HRE from Sun was archaic and complex, Tomoe is for Japanese and Chinese scripts only, LipiTk from HP Labs doesn't work as advertised and support won't reply to emails. Google's revived Tesseract is only an OCR engine, good for typed text, not handwriting. OCRopus is also a work in progress and not working at the moment. With all my hopes for the project down, I finally reached CellWriter. CellWriter seems to be a working handwriting recognizer, but it doesn't work as modestly as InkWell or as perfectly as InkAnalysis. CellWriter uses a cell-based, single-character recognizer. It requires training, without which it can't understand much, but once trained it is pretty accurate at character recognition. I tried applying it to multiple cells, though, and it doesn't quite work. It may be a good character recognizer, but it's not handwriting recognition.

I thus concluded that Microsoft is way ahead of the competition in handwriting recognition. Handwritten input is useful for a lot of people, and developers in the open-source community have yet to realize this fact. With all the Vista ranting popular on the web, someone should have seen its good side!!

Intel Invests in Solar Cells

With rising fuel costs and the dangers of global warming from carbon fuels, everyone seems to be looking at alternative sources of energy. The last week has been especially interesting because three major tech companies specializing in semiconductor manufacturing have got into manufacturing solar cells. Intel today announced that its spin-off SpectraWatt will be manufacturing solar cells for panel makers starting in the middle of next year.

Just last week, IBM made a similar announcement about manufacturing solar cells with the Japanese semiconductor firm Tokyo Ohka Kogyo. Before that, HP said that it has licensed some technology to a solar panel startup called Xtreme Energetics. All these companies are putting their expertise in semiconductor manufacturing into improving solar cell efficiencies and decreasing the cost of manufacturing these cells.

Intel Capital has put some $50 million into SpectraWatt, and Cogentrix Energy, PCG Clean Energy and Technology Fund, and Solon AG are also investing in Intel's latest spin-off. SpectraWatt is going to manufacture these photovoltaic cells at a fabrication plant in Oregon. Once the Oregon plant is fully functional, it plans to churn out 60 megawatts' worth of cells sometime in Q2 2009.

Hopefully all this means that cheaper solar panels can be used in homes and sufficient power can be produced at good value for money. Today, solar panels cost way too much and produce too little power to practically replace normal power lines.

Monday, June 16, 2008

Improving Java Web Performance With C/C++

From the very first day I started working on OpenMRS, I felt that OpenMRS ran a little slower than I expected. The old OpenMRS demo server probably adds to the slowness. Later, when we were discussing how Hibernate sessions should be implemented in OpenMRS and Java web apps in general, I was again brought to think about OpenMRS performance.

Since the OpenMRS community generally deploys on Tomcat, my main aim was to improve the performance of the servlet container. One simple way to improve performance, which I had heard of earlier, is Apache's "Apache Portable Runtime" (APR) project with its native libraries. APR uses native libraries through JNI to improve server performance on a specific platform. In short, Tomcat is given some local OS steroids; it currently works on Windows and POSIX-based systems.

The APR library is somewhat of an irony for 2 main reasons:

  • I've heard the argument that Tomcat runs faster than Apache in some benchmarks. These folks argue that Java is faster than C/C++ and hence Tomcat wins.
  • On the other hand, APR and native Tomcat use JNI code written in C/C++ to improve performance.

Either way, I think generalizing the above statements isn't correct, and hence I went ahead to see if APR does improve the performance of our web application. I used Windows Vista and Tomcat 6.0.16 for the test, since Windows is probably what most OpenMRS implementations run on. You can download the native binaries for Windows from here and APR from here. Add the extracted files to the Path and place tcnative-1.dll in APR's bin folder; the relevant server configuration is sketched below.
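Once the native DLL is on the Path, enabling APR is mostly a matter of Tomcat's conf/server.xml. A rough sketch of the relevant bits (the stock server.xml already carries the listener; spelling out the protocol class pins the connector to the APR implementation instead of the pure-Java one):

<!-- load the APR/native library at startup, if present -->
<Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />

<!-- HTTP connector explicitly backed by APR -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11AprProtocol"
           connectionTimeout="20000" redirectPort="8443" />

With the default protocol="HTTP/1.1", Tomcat auto-selects the APR connector whenever the native library is found, so the explicit class is just insurance.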

And the first thing I observed was that Tomcat started a little faster and even OpenMRS initialized slightly faster.

 
                          Before      After
OpenMRS initialization    192 ms      183 ms
Tomcat server startup     12892 ms    11449 ms

But startup improvement is not everything; we want to check how well the application performs under load. Apache Benchmark (ab) is a good way to test static content, but isn't very good at dynamic content... I wanted to use Faban, remembering Scott Oaks' writeup from last year, but couldn't find enough time for testing with Faban...

Instead, I used JMeter, which does a nice generalized test that replicates how a user interacts with the web application. You can send POST requests with parameters and script your whole test plan, just like a normal web user would use your application.
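For repeatable runs, the saved plan can also be driven headless from the command line (a sketch; the .jmx and .jtl file names are hypothetical):

# run the saved plan in non-GUI mode and log per-request timings
jmeter -n -t openmrs-plan.jmx -l results.jtl

Here are some of the results on different OpenMRS pages, with 10 concurrent requests and an average of 3 runs on my dual-core server: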

 
                      Without APR          With APR
OpenMRS homepage      225.7 ms/request     185.7 ms/request
User login            1464.2 ms/request    1185.3 ms/request
Find patient          95 ms/request        80 ms/request
Patient dashboard     2887.6 ms/request    1984.3 ms/request

My first observation was that the first run of the test completely sucks; the later runs improve performance drastically. This is because Tomcat 6 has a good caching mechanism, and it showed both with and without APR. Another thing I observed was that beyond 500 concurrent users the application was crying and Tomcat was hanging, APR or no APR... I've yet to analyze why it wouldn't scale any further, but it must be something related to Hibernate sessions. Maybe some experienced developer can look into these figures, perform some more specific benchmarks and improve scalability.

Saturday, June 14, 2008

Mac Mini Will Be Dead Soon

If you have that cute-looking small box from Apple that fits beside any monitor, called the Mac mini, then soon you may be the owner of a discontinued product. Rumour has it, once again, that Apple has plans to kill the Mac mini and replace it with something else. The death of the Mac mini has been predicted a lot of times, but call it Steve Jobs' insistence on a living-room computer or Apple's love for small things, the Mac mini has lived on...

But this time it could be a little different... Apple has notified its partners that the Mac mini will soon reach EOL (end-of-life) and a newer product, possibly an addition to Apple TV, will be launched. The Mac mini is the cheapest Mac available and has been Steve Jobs' favorite for a long time; the "Cube" has lived on!! Apple has of late been working on consumer electronics that can be used in homes and living rooms along with the TV. Apple TV surely aimed at that market, but isn't as popular as one would want. Probably adding a computer and OS X to the Apple TV can help... it would probably be logical to have one product do it all.

But the announcement from Apple about this product should come sooner rather than later. The back-to-school season would be a good time to release the new product and help people make the switch to the cheapest Mac...

Friday, June 13, 2008

US Broadband Hit With Indian ISP Madbug

While reading this article about AT&T's decision to introduce usage-based pricing, I couldn't stop thinking that the American ISPs have been hit by the same bug which Indian ISPs have been infected with. And it's not just AT&T; previously Time Warner and then Comcast made similar claims that charging customers who use more bandwidth will help ease pressure on other customers. And for the record, we in India never know what "real" unlimited broadband means.

If you think like a communist, probably what the US ISPs are doing is correct. Obviously, due to heavy usage by some users, the network feels the pinch and congestion results in low speeds for other users. You would think that those customers should be charged more because they are using more than others. But the deal in the first place promised "unlimited" broadband access. You opted for a service that meant you could use the internet as much as you want. Why would you suffer if someone else is using less? It's not your problem!!

We have had a similar issue in India, with ISPs not offering such "unlimited" broadband services. There is always a data cap, above which more bucks have to be put on the table depending on how much you've used. The point of such caps is to make users use less of the network. But shouldn't the ISPs improve the network first, before signing up more subscribers? Shouldn't they first upgrade the networks and provide good service, and only then increase their charges? Wolfgang Gruener (TG Daily) puts forth his experience with broadband in the US and his opinion on this usage-based pricing. I have to say, it's the Indian ISP bug which has hit the American ISPs. I have twice received letters from my ISP (Hathway Broadband) saying that I've been downloading too much on my 256Kbps unlimited broadband connection. How much more can you download on a connection that works at 150Kbps, when the advertised speed is 256Kbps??

AT&T today said, "a form of usage-based pricing for those customers who have abnormally high usage patterns is inevitable…" I had received nearly the same line from my ISP. Comcast, on the other hand, said that it will give "delayed response times for Internet traffic only for those customers who are using more than their fair share of available Internet resources at the time." Comcast then added that "most customers will notice little to no change in their Internet experience when the new network management technique is working"… It's interesting to note how the ISPs are playing with words to say that they don't want to provide good service.

India and the US are big markets and both countries have great software/services sectors. India is growing into a big online market, and online services and applications require good broadband connections. Democracies are good places where the government can step in and listen to customer needs… Hope some politician can cure the ISPs of this madbug!!

Thursday, June 12, 2008

AMD Working on Intel's Havok Physics

AMD and Intel are rarely to be found in friendly talks. But this time, AMD is in talks with Intel to integrate "Havok FX" physics into its ATI graphics chips, which is meant to improve realism in games and physics calculations in scientific applications. Intel acquired the physics technology company Havok last year, and AMD plans to use its API inside GPUs.

AMD, like Intel, has plans for integrated CPU+GPU processors, but plans to use Havok physics in discrete graphics. The main competitor in physics processing technology is Ageia, which was acquired by Nvidia. With two different technologies for physics, AMD's decision to use Havok has important implications for game developers as well as scientific application developers. Nvidia has been pushing Ageia's PhysX technology to game developers, and AMD was starting to feel lost. Today's news about AMD using Havok gives AMD an advantage, and it's called Intel marketing: Intel will be using Havok inside its CPU+GPU Larrabee, and developers will be magically tilted towards Havok because of Intel's large market share.

The official announcement of how much AMD has to pay Intel for the technology is still not out, but surely good money will change hands. Physics will definitely play an important role in the future of GPUs, whether in games or general-purpose computing. Along with ray tracing, Intel is betting a lot on physics for the future of advanced computing and HPC. AMD has lots of room to maneuver, since it can switch between GPUs and CPUs with relative ease. Let's wait and see how everything falls into place!!

June 17th is Download Day for Firefox 3

Mozilla has announced that it will be releasing Firefox 3 on the 17th of June. Mozilla had earlier announced that it plans to set a world record for the highest number of downloads in a day and had asked everyone to spread the word. The support pledge can be found here, and with the release of RC2 last week, everything is stable and ready to be released.

Firefox 3 has a host of new features, improved performance and better security. Mozilla claims that Firefox 3 has 15,000 tweaks and enhancements compared to the earlier version. The download day is important to the open-source world because it is like a show of strength against the corporate closed-source world. Firefox has been grabbing market share rapidly in the last few months, and Firefox 3 hopes to extend the browser's reach.

Get all set and download from “GetFirefox.com” when the download day arrives on June 17th.

Wednesday, June 11, 2008

Sun Microsystems Does Awesome Web Throttling

Web downloads can be really painful for users as well as companies when there is a sudden surge of downloaders or traffic suddenly increases exponentially. Most Linux distros these days ask people to use BitTorrent downloads as a savior to prevent download bottlenecks... And on similar grounds, Sun Microsystems seems to have some excellent throttling going on with the downloads from their servers.

If you have recently downloaded something off Sun's web servers, you must have observed this phenomenon. When you try to download something from Sun's servers, the speed increases initially. Then after some time the speed falls and steadies at slower than the initial speed. If you happen to use a download manager, you can stop and start the download again: the speed will increase and then fall back again... And when your download is about to finish, all of a sudden the speed increases and you finish the download faster than you thought!!

You are probably guessing this may have been a one-time thing, but I have done close to 100 different downloads from the Sun servers in the last week, from different locations and ISPs, and the same thing happened every time. Maybe it's something in Solaris or Apache or virtualization or something else... not that my observations were scientific enough...

Have you observed something similar?? Is Sun working on some load-balancing web server and experimenting with it?? Either way, I think web downloads are a good place to employ throttling. For companies, large downloads particularly benefit from such practices, but for users, BitTorrent is definitely the way to go!!

Monday, June 9, 2008

Second Week For OpenMRS Coding

Last week was a really hectic time for me, and hence I haven't found much time to code or blog. A close friend, Debojit, is in a new Idol-like reality show called "Jo Jeeta Wohi Superstar" and we are back in the publicity campaign, like we were last time when he won Saregamapa Challenge 2005. The good thing is that I'm still writing code (to game the online voting), though not exactly for OpenMRS… But I did get some work done on OpenMRS and nearly have a deliverable basic patient search.

The patient search in my Registration Module has taught me a few important lessons. OpenMRS's web application uses the Model-View-Controller (MVC) pattern through the Spring Framework. I have a controller which calls some methods from the OpenMRS API and retrieves patient information. Normally, the practice is to return values from the controller to the view (a JSP here) through a bean's getter methods: the controller fills the bean object with values from the database, and the view (i.e. the JSP page) uses the getter methods to read them, roughly as sketched below.
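To make that convention concrete, here is a minimal sketch of what such a backing bean might look like. The class and field names are my own illustration, not actual OpenMRS code:

public class PatientRow {
    private String identifier;
    private String givenName;

    // Spring hands the bean to the view as the form backing object;
    // the JSP then reads it via EL, e.g. ${registrationForm.identifier}
    public String getIdentifier() { return identifier; }
    public void setIdentifier(String identifier) { this.identifier = identifier; }
    public String getGivenName() { return givenName; }
    public void setGivenName(String givenName) { this.givenName = givenName; }
}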

But instead of a bean, I tried to return a two-dimensional array and got stuck with the following... I'm still wondering why I can't access the length variable of the array. Look at the code snippets below; maybe I'll get some hints from you!!

RegistrationController:

protected String[][] formBackingObject(HttpServletRequest request) throws Exception {
    String[][] searchedPatients = new String[0][0];
    if (request.getParameter("phrase") != null) {
        List<Patient> patients = Context.getPatientService().getPatients(request.getParameter("phrase"));
        searchedPatients = new String[patients.size()][8];
        for (int i = 0; i < patients.size(); i++) {
            Patient p = patients.get(i);
            searchedPatients[i][0] = p.getPatientIdentifier().getIdentifier();
            searchedPatients[i][1] = p.getGivenName();
            searchedPatients[i][2] = p.getMiddleName();
            searchedPatients[i][3] = p.getFamilyName();
            searchedPatients[i][4] = String.valueOf(p.getAge());
            searchedPatients[i][5] = p.getGender();
            searchedPatients[i][6] = p.getTribe().getName();
            searchedPatients[i][7] = p.getBirthdate().toString();
        }
        log.info("# of patients found: " + searchedPatients.length);
    }
    return searchedPatients;
}

This String[][], called searchedPatients, can be accessed as registrationForm according to my moduleApplicationContext mapping. But in my JSP page, when I try to access the .length variable of the registration form, there seems to be a problem.

registrationForm.jsp

<c:forEach var="row" begin="0" end="${registrationForm.length}">
    <tr>
        <td>${registrationForm[row][0]}</td>
        <td>${registrationForm[row][1]}</td>
        <td>${registrationForm[row][2]}</td>
        <td>${registrationForm[row][3]}</td>
        <td>${registrationForm[row][4]}</td>
        <td>${registrationForm[row][5]}</td>
        <td>${registrationForm[row][6]}</td>
        <td>${registrationForm[row][7]}</td>
    </tr>
</c:forEach>

I get an error where the JSP page throws a NumberFormatException for the "length" input. Now I was baffled: why was it trying to take "length" as an input string, when it should have taken the length of the array?
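One likely explanation (my reading of the JSP EL rules, so treat it as a hint rather than gospel): for an array, EL evaluates registrationForm.length as the indexed lookup registrationForm["length"] and tries to coerce the string "length" into an integer index, hence the NumberFormatException. A minimal workaround is to iterate over the array directly, so no explicit length is needed:

<c:forEach var="row" items="${registrationForm}">
    <tr>
        <td>${row[0]}</td>
        <td>${row[1]}</td>
        <!-- ...columns 2 through 7 the same way... -->
    </tr>
</c:forEach>

Alternatively, the JSTL functions taglib offers ${fn:length(registrationForm)} when the count itself is needed.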

Anyway, with the bean the current patient search is working, but I have to get back to coding quickly and build a good UI, for which I'm using jQuery. With jQuery I plan to implement AJAX and also some simple but useful UI improvements. Next in line is searching using the barcode reader, and there's a lot more left to do… Hopefully, I'll do more!!

Friday, June 6, 2008

Opera Adds Feature to Block Malicious Websites

Opera has announced that it has made a deal with Haute Secure to add Fraud Protection to Opera 9.5. Haute Secure has a technology which can prevent malware from being installed, or can warn the user that the web page being visited contains some kind of malware. The technology is similar to what Mozilla's Firefox 2 had, and which has been made the default in Firefox 3.

Available in Opera 9.5, the upcoming new version of Opera's Web browser, Fraud Protection featuring Haute Secure technology prevents users from accidentally or unknowingly downloading rogue malware designed to steal personal data such as credit card numbers, passwords and other identity information. Jon von Tetzchner, CEO, Opera, said, "Today we've extended that commitment beyond the browser to protect our users from malware that tries to attack when they visit a compromised Web page or that they may unintentionally try to download. Haute Secure's prevention technology is reinventing Web-based threat detection and we look forward to working together in protecting our users all over the world."

Firefox 2 had an option to use Google's list of malware-infected or known malicious websites; with Firefox 3, it's on by default. Both Firefox and Opera have realized that a lot of malicious content on the Internet is no longer passed through virus-infected executables or network-propagating worms. Although these threats are dealt with by antivirus (AV) software, AVs work best after your computer has been infected. This approach of warning users when they visit a malicious website is more useful. Credit-card information and phishing are the new ways hackers make money, and this preventive solution can go a long way towards putting hackers out of their jobs.

Opera is not a browser with a large market share, but it has definitely been at the forefront of innovation. It has provided innovative features with great agility and added features found in other browsers very quickly. Opera is available on nearly every Internet-enabled platform, and it makes sense to add this new layer of security.

Wednesday, June 4, 2008

Practice Download Day: Firefox 3 RC2 Released

The Firefox 3 download day is coming closer and Mozilla has just released Firefox 3 RC2. This version is faster, smoother and uses less memory compared to previous versions. RC2 is the last release candidate before the final release, so now is a good time to see whether all the bugs have been ironed out.

Probably another important purpose is to test the server capacity for the download day; today's release may serve as a useful dry run. Download it from here and have fun!!

Tuesday, June 3, 2008

Adobe Buzzword Could be the Google Docs Killer

Online word processors are used by a lot of people; document collaboration and sharing is their killer feature. But to date, no online word processor has caught up with Microsoft Office in terms of a good-looking UI. Google Docs and Zoho are the biggest players in the online word processing market, but yesterday Adobe launched Buzzword, its own online word processor.

Adobe Buzzword is a Flash-based online word processor with an excellent-looking user interface. The UI is not just graphically good-looking, it is also as fast as the Google Docs UI. Buzzword is fast and intuitive, and it features collaboration, image embedding, table creation and everything else you have come to expect of a good online word processor.

[Screenshot: Adobe Buzzword]

Buzzword has good fonts and you can get a PDF out for printing. It is an excellent word processor if you want to make PDFs and send them out to others without collaborating on documents. Adobe Buzzword runs on Adobe Flash 9, and that runtime is so small that you can start using Buzzword nearly instantly; it doesn't feel any different from a desktop word processor. Buzzword is still in beta and you have to sign up for the beta from here.

Along with Buzzword, there is also a useful private online chat room from Adobe called Adobe ConnectNow. It is also in beta and can be used for online meetings and real-time drawing. Adobe is doing some excellent things online, and Google should watch out for Adobe, since it is really catching up on the web.

[Screenshot: Adobe ConnectNow]

Monday, June 2, 2008

3G iPhone Expectations Decrease Market Share

The latest smartphone market share data released by IDC shows that Apple's iPhone has lost market share, while RIM's BlackBerry has gained. Does this mean that the BlackBerry is fighting back against the iPhone? Is the excitement about the iPhone coming to an end?

If you look at the facts, it seems like Apple is losing its lustre. The iPhone had a market share of 19.2% in Q1'08 compared to 26.7% in Q4'07. At the same time, the BlackBerry increased its market share from 35.1% in Q4'07 to 44.5% in Q1'08. Thus it seems the BlackBerry is a victor in the making. IDC analyst Ramon Llamas said the BlackBerry is now strong in the "prosumer" segment, as RIM has successfully widened the appeal of the device beyond the professionals who have been its core customer group.

Actually, the fact is that iPhone excitement is just about to reach its peak sometime later this year, when the Apple application store comes into being. Apple has released its developer program and developers are in the process of finishing their applications. A lot of prospective iPhone buyers are also waiting for the 3G version, which Apple plans to release in a few months' time. With only a small timespan left, Apple will be up again, and I think it will capture back all the lost market share.