So yesterday evening I started writing a Google Chrome extension for my company. The documentation was easy to find, and I started simply: creating a manifest file and working from the example that pulls Flickr pictures. It took me about five hours, but I finally got a decent prototype working.
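For anyone curious, the manifest is just a small JSON file. Mine looked roughly like this; the name, icon and permissions below are illustrative rather than copied from the real thing, and the field names shifted around in those early releases, so check the current docs:

{
  "name": "Flickr Photo Viewer",
  "version": "1.0",
  "description": "Shows recent Flickr photos in a browser-action popup",
  "browser_action": {
    "default_icon": "icon.png",
    "popup": "popup.html"
  },
  "permissions": ["http://api.flickr.com/"]
}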

People tell me I might get hate mail from Firefox users, but at the moment writing Chrome extensions in JavaScript is much, much easier than writing Firefox extensions.

I think I can safely say that any future JS problems I might have can very well be solved by Daniel Hall.

I was trying to create a Google Chrome extension for something at work, and I wanted to link out of the popup using standard href tags. This did not work, so I tried attaching event handlers, which resulted in the callback being invoked at bind time rather than when the event fired:

link.addEventListener("click", callbackFunction(args)); // bug: calls callbackFunction right now, not on click

What I did not know was that by passing arguments to the callback function, I was actually calling it rather than binding a reference to it to the event handler. This was explained to me by Daniel, who, I should add, is a very capable system administrator. And now a JS expert.
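For anyone who hits the same thing, the fix is to wrap the call in an anonymous function, so nothing runs until the event actually fires. A minimal sketch, with callbackFunction and args as placeholders:

// Hand addEventListener a function, not the result of calling one.
link.addEventListener("click", function () {
  callbackFunction(args);
});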

Recently I had the chance to evaluate a Lucene index and perform queries on it outside of tools like Luke. For this we built a Rails project and used JRuby as our language, since that allows us to import Java packages.

The analyzer Luke uses by default is the StandardAnalyzer. This proved very slow for queries over fields like email addresses or dates. I switched to the KeywordAnalyzer, which is meant for exactly those kinds of fields, and there was a marked improvement, but not enough. So when Steven Bearzatto mentioned the WhitespaceAnalyzer, I decided to try it, and immediately there were performance gains. The size of the gain does depend on the field being queried, though.

Of the languages I have used in my professional career, Ruby is the one that makes testing easy and fun. It has an easy-to-understand testing framework, and mocking out external classes is, most of the time, very simple. For example:

class Sauron
  # Class methods, so the specs below can call Sauron.wants directly
  def self.wants
    "the ring"
  end
end

class Gandalf
  def self.wants
    "peace"
  end
end

class MiddleEarth
  def who_wants_what
    if Gandalf.wants == "peace" and Sauron.wants == "the ring"
      "war"
    else
      "peace"
    end
  end
end

Let's say we have to test Gandalf or Sauron:
Gandalf.wants.should eql "peace"
Sauron.wants.should eql "the ring"

The above is RSpec, and it is very simple and easy to understand. Whether the reader is a Business Analyst, a Project Manager, or even your mother, they can fully understand the required behaviour of Gandalf or Sauron.

But what if we had to unit test MiddleEarth? Then we would have to mock Gandalf and Sauron, together and alternately, to exercise all the code paths. RSpec makes that easy:
Gandalf.should_receive(:wants).and_return("something")
Sauron.should_receive(:wants).and_return("something else")

If either method of either class expected arguments, we can deal with that too:
Sauron.should_receive(:wants).with("something").and_return("something else")

So there you go: testing and mocking made inherently easy with Ruby and its testing frameworks. But how does it compare to the old language of Perl? Perl, too, now has a testing framework very similar to RSpec, called Test::Expectation. And for mocking out external influences when unit testing, there are Test::MockObject and Sub::Override.

And if we wanted business-level tests in either language, Cucumber can sit on top of both, although it works much more readily with Ruby/RSpec.

This Thursday and Friday my company held its quarterly innovation day event. This is similar to Google's own Innovation Time Off or Atlassian's FedEx days. In short, we as employees got to propose ideas beforehand and then work on them for two whole days. Towards the end of Friday, we presented our efforts in front of the company, and the best ideas got a business impetus to be developed as a company feature. The result of the last innovation day was the open sourcing of Gizmo, a page model developed in house by Luke Cunningham.

So last Thursday I decided to join forces with Geoffrey Giesemann on his idea of visualising suburbs, and their corresponding information, on Google Maps. I hoped my recent knowledge of the Maps V3 API would come in handy. We started at about lunch time on Thursday and split the task into front end and back end. The back end became Geoff's task: he decided to write a Rails application to load the monstrously verbose and complicated suburbs file and parse it into neat JSON (neat here meaning free of extraneous data). My task became the front end: mock up the JSON data that Geoff would provide, and set up the map to display it. And thankfully, my knowledge from the recent devfest did come in handy 🙂 .

I started with a simple map and made sure I could load the defaults and set up a single marker. This was fairly easy, as the new API is very intuitive to work with. Then came adding additional controls to the top left of the map, which we decided would be the capital cities of the states. These would let us jump directly to the geographical centre of each state, or the centre of Australia if we wanted. The mock data simply comprised the suburb name, its geocode (latitude and longitude) and the name of the state it belonged to. I started out with a hundred suburbs, loaded them with default markers in Firefox, and the whole thing worked neatly. I set up the info windows, and was disappointed to find that their content is plain HTML (I was hoping for more controls; they are fun). The fun started when I loaded all the suburbs from Australia that I could find (in this case about 15,000).
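Before the trouble started, the skeleton of the page looked roughly like this. The coordinates and content are placeholders, but the calls are the V3 API as I used it:

// Map centred (roughly) on Australia, with one marker and an info window.
var map = new google.maps.Map(document.getElementById("map"), {
  zoom: 4,
  center: new google.maps.LatLng(-25.6, 134.35),
  mapTypeId: google.maps.MapTypeId.ROADMAP
});

var marker = new google.maps.Marker({
  position: new google.maps.LatLng(-42.88, 147.33), // Hobart
  map: map,
  title: "Hobart"
});

var info = new google.maps.InfoWindow({ content: "<b>Hobart</b>, TAS" });
google.maps.event.addListener(marker, "click", function () {
  info.open(map, marker);
});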

The fun was that with 15,000 suburbs, all my browsers crashed: Firefox, Safari, Chrome. Someone suggested lynx, but we had to decline, even though an ASCII representation was tempting 😛 . So we decided to use clustering. The V2 API used to have MarkerClusterer, but for V3, Fluster2 was recommended, and rightly so. It was very easy to use and even easier to understand, even for someone like me who has not done much JavaScript. But even after we clustered the suburbs, only Chrome could manage the load, probably thanks to its V8 engine.
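As best I remember Fluster2's API, dropping it in was about this much work (markers being the array of markers we had already built):

// Feed the existing markers to the clusterer instead of the map directly.
var fluster = new Fluster2(map);
for (var i = 0; i < markers.length; i++) {
  fluster.addMarker(markers[i]);
}
fluster.initialize();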

Then we added custom markers for suburbs based on their state.

custom markers on Hobart

These markers were obtained from Google's map icon repository.
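The change itself was tiny. A sketch, with a made-up stateIcons mapping and illustrative icon URLs:

// Pick a marker icon per state; the Marker icon option accepts a URL.
var stateIcons = {
  TAS: "http://maps.google.com/mapfiles/ms/icons/blue-dot.png",
  NT: "http://maps.google.com/mapfiles/ms/icons/red-dot.png"
  // ...and so on for the other states
};
var marker = new google.maps.Marker({
  position: suburbLatLng,
  map: map,
  icon: stateIcons[suburb.state]
});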

We also put some suburb information in each InfoWindow:

info window over Darwin

With that, our front end was complete. Geoffrey soon provided the actual JSON data, and after optimising our JS code to handle such a large load, we had a very nice application.

P.S. Particular thanks to Daniel Hall, who made me understand why the 'A' in AJAX stands for asynchronous.
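The lesson, in sketch form (the URL and addMarkersFor are made up): the response arrives in a callback, and nothing after send() should assume the data is already there.

var xhr = new XMLHttpRequest();
xhr.open("GET", "/suburbs.json", true); // true is the 'A': asynchronous
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    var suburbs = JSON.parse(xhr.responseText);
    addMarkersFor(suburbs); // safe here: the data has actually arrived
  }
};
xhr.send();
// code here runs immediately, long before the response comes back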

So for the last two days I have been in Sydney at the Google office enjoying the developer fest.

It has been quite fun, and I must say that their new APIs for Buzz and Maps (V3) are very exciting. My friend and I hacked together quite a neat little real estate app for Buzz yesterday and managed to present it; it was quite well received. I also met a whole heap of very talented people from many, many places, with weird new ideas.

I got to talk to developers from Google who worked on Fusion, Buzz, Wave, Google Earth and the Directions API. I got to see a whole heap of new setup in the APIs, and also how performance bottlenecks have been identified. I found out how they test a lot of JS (especially on Maps, when they moved from V2 to V3).

All in all, a great couple of days' fun. Learnt heaps and enjoyed it so much 🙂