Tuesday, December 04, 2007

XP Days conference

Here are a few lines describing my impressions of the XP Days Benelux conference that took place in Belgium a few weeks ago.

Organization of the conference
With 115 participants and 4 parallel sessions, the conference had a friendly and personal atmosphere. It was also very well organized. At the beginning of each day, the presenters had 60 seconds to stand up and "sell" their session. This made it easier to choose among the 4 parallel sessions.

Product owner
In one of the hands-on sessions, we learned how important it is to have a product owner (PO) closely involved in the project. XP and Scrum talk about the "customer on site". This point was also mentioned by other participants in informal chats. It became clear that having a readily accessible PO, someone capable of deciding on and prioritizing the product feature set, made a big difference.

Retrospectives
In my humble opinion, retrospectives are one of the most powerful ideas from the XP/agile world. Basically, they mean that the team members take the time to reflect on their various processes and improve upon them. Retrospectives happen frequently, which differentiates them from project post-mortems. At the end of the first day of the conference, the organizers held a retrospective on the conference itself, improving it on the fly.

TDD (test driven development)
An excellent development practice, but one which can end up warping your mind. I thought I had been practicing TDD for some time, but apparently not well enough in the opinion of the purists. Supposedly, you have to make a consistent effort to come up with the tiniest possible change to the implementation, barely sufficient to make the tests pass. It made me feel like my mind was in shackles. Apparently, you get used to it. I hope I never do.

In other sessions, I learned that tests can be considered a specification. As such, the test-writing phase is more akin to design.

To write maintainable tests, you can start by asking yourself whether, by reading only the test code, one could come up with the solution, i.e. the implementation. Once you do that, you can start viewing the test code as the origin of the implementation.
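To make the idea concrete, here is a tiny, hypothetical JUnit sketch (the PriceCalculator and its behavior are entirely made up): the tests read like a specification, and the class below them is the barely-sufficient implementation that makes them pass.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// The tests double as a specification: reading them alone tells you
// what the (hypothetical) PriceCalculator must do.
public class PriceCalculatorTest {

    @Test
    public void totalOfEmptyOrderIsZero() {
        assertEquals(0, new PriceCalculator().total());
    }

    @Test
    public void totalIsSumOfItemPrices() {
        PriceCalculator calc = new PriceCalculator();
        calc.addItem(30);
        calc.addItem(12);
        assertEquals(42, calc.total());
    }
}

// The "tiniest possible" implementation, barely sufficient to make
// the tests above pass -- nothing more.
class PriceCalculator {
    private int total;

    void addItem(int price) {
        total += price;
    }

    int total() {
        return total;
    }
}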

Teams
Teams need time to gel. The arrival or departure of a member will disturb the team dynamic. Some people talk about a new team after any change in membership. It may sound extreme, but I think there is some truth to it.

Agility and co.
I was surprised to discover that agile methods require a lot of discipline. XP and Scrum define detailed procedures that some people follow religiously. The no-compromise, take-no-prisoners, all-or-nothing approach of certain participants seemed disturbingly martial, verging on the intolerant.

Having said that, there are many excellent ideas brewing in the Agile world. Next time you stumble upon an XPDays conference in your neighborhood, I'd recommend that you attend.

Sunday, September 02, 2007

Yet another choice

The recent adoption of the SLF4J API by Tapestry and Howard's blog post on the subject have triggered a frenzy of comments, most of them very favorable, with the exception of Dion Almaer's. Dion ridicules the unholy habit we J2EE developers have of trying to abstract every little API we might come into contact with.

I am inclined to agree with Dion, but for different reasons. Writing a good abstraction layer for two or more distinct systems takes serious effort. I'd go as far as declaring that the task is impossible unless the systems in question are very similar or the owners of these systems unconditionally submit to the authority of the abstraction layer.

In the case of log4j and java.util.logging (JUL), Jakarta commons-logging (JCL) was only able to abstract the core of the underlying APIs, because it is at the core that the two are similar both conceptually and structurally. JCL was not able to abstract the parts below the core API. For example, JCL does not offer any help with respect to configuring the underlying logging system. SLF4J fares only a little better, in that it offers abstractions for both MDC and Marker, in addition to the core logging API.
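For the curious, here is a small sketch of what that facade looks like in use, core logging API plus MDC and Marker; the class name and the log messages are of course invented, and the actual backend (log4j, logback, JUL, ...) is only chosen by the binding on the classpath.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;

public class Slf4jFacadeDemo {
    // Core API: obtain a logger through the facade; the backend is
    // determined by whichever SLF4J binding is deployed with the app.
    private static final Logger logger = LoggerFactory.getLogger(Slf4jFacadeDemo.class);

    public static void main(String[] args) {
        // MDC: per-thread diagnostic context, abstracted by SLF4J.
        MDC.put("requestId", "42");

        // Marker: a way of tagging individual log statements, also part of the facade.
        Marker confidential = MarkerFactory.getMarker("CONFIDENTIAL");

        logger.info("processing order {}", 1001);
        logger.warn(confidential, "card rejected for user {}", "alice");

        MDC.remove("requestId");
    }
}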

JDBC can be cited as a counterexample, a successful abstraction layer. However, it is successful insofar as the RDBMS providers submit to the authority of the JDBC specification. They all go out of their way to implement a driver compatible with the latest version of the JDBC specification. Moreover, RDBMS applications already share a similar structure by way of SQL.
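As a quick illustration, the following hypothetical snippet counts rows through plain JDBC; the URLs, credentials and the users table are made up, but the point is that the same method runs unchanged against any database whose vendor ships a compliant driver.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcPortabilityDemo {

    // The same code runs against any database with a compliant driver;
    // only the JDBC URL (and the driver jar on the classpath) changes.
    // With a pre-JDBC4 driver you would first call Class.forName(...) to register it.
    static int countUsers(String jdbcUrl, String user, String password) throws SQLException {
        Connection con = DriverManager.getConnection(jdbcUrl, user, password);
        try {
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM users");
            rs.next();
            return rs.getInt(1);
        } finally {
            con.close();
        }
    }

    public static void main(String[] args) throws SQLException {
        // Hypothetical URL -- swapping in, say, "jdbc:mysql://localhost/shop"
        // would work just as well, without touching countUsers().
        System.out.println(countUsers("jdbc:postgresql://localhost/shop", "shop", "secret"));
    }
}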

When the systems differ substantially, it is nearly impossible to bridge the gap. Is there an abstraction layer bridging relational and OO databases? I think not. The relational/OO impedance mismatch gave birth to major development efforts. Take Hibernate for instance. Would you dream of writing Hibernate as a weekend project?

So why did JCL, with all its warts, catch on like wildfire? Because JCL provides a convenient answer to the log4j vs. JUL dilemma faced by authors of most Java libraries. The dilemma does not exist in other languages because there usually is one predominant logging system for the language. In Java we have log4j getting most of the mindshare, with JUL looming in the background, not much used but not ignorable either -- hence the dilemma.

Anyway, Dion has a point. We, in the J2EE community, do indeed waste too much time dabbling in secondary matters such as logging, but we only do so because we have the luxury of choice. We can choose between log4j, logback or JUL as our logging system. We can choose between Ant, Ivy or Maven for our builds. We can choose between Eclipse, IDEA and Netbeans for our IDE. We can choose between JSF, Tapestry, Spring, Struts or Wicket as our web-application framework.

Making choices takes time and effort, but it also exerts a powerful attraction on our psyche. When presented with a choice, programmers (to the extent that we programmers can be assimilated to humans) prefer a situation where we can choose among multiple options to one where we are presented with only a single option.

Java presents us with more choices than any other language, probably because it is also the most successful language in history. Of course, you already know that successful does not necessarily mean best.

Anyway, I am quite happy to see SLF4J being adopted so massively.

Friday, June 29, 2007

GIT vs Subversion

Linus Torvalds recently (2007-05-05) gave a presentation about GIT at Google. The video of the presentation is available on YouTube.

In this particular presentation, I found Linus to be opinionated and rather unconvincing. He is extremely critical of CVS and Subversion. While GIT may be well-adapted to Linux's development model, I believe Subversion gets the job done in other environments.

Martin Tomes, in his comments about GIT, nails the point. GIT and Subversion aim at different development models. While not perfect, the classical (centralized) model works well in both large and small projects, open-source or not.

The GIT project publishes a detailed albeit biased comparison between GIT and Subversion. The comparison makes a convincing case on why GIT offers better support for merges. The same page also mentions that the user interface for Subversion is better.

Monday, June 11, 2007

Selling YAGNI

I am quite fond of the YAGNI principle because it helps me concentrate on the essentials of the application currently under development. Another explanation is that I am getting lazier with age.

YAGNI tends to sell well with developers. It prunes needless work. However, with customers who ask for features, the YAGNI principle does not sit quite as well. People in general do not appreciate having their decisions questioned, and YAGNI can be reduced to one question: "Do you really need this feature?" The answer is often yes, forcing the skeptic in me to repeat the question, perhaps in a modified form. Most people, customers included, do not like to be challenged, especially with some insistence.

Pruning requirements to mere essentials takes both work and courage. In the eyes of the customer, the alternative, i.e. asking for potentially useless features, may often look both easier and less risky.

I try to use the argument advocated on the c2 wiki: the feature implemented in anticipation now may be radically different from the feature actually needed in the future.

So how do you apply the YAGNI principle in a real-world environment? What are the arguments that may sway your customers or fellow developers?

Tuesday, May 29, 2007

Evolving a popular API

Authoring an API which later becomes popular can be both a blessing and a curse. If your design is imperfect, which it is bound to be, you will be frequently flamed for its flaws. Except for the most trivial systems, it is outright impossible to get an API right the first time. You will need several iterations to perfect your design.

Take Tapestry, for example. It has evolved over seven years and five iterations to become what it is today. Unfortunately, some of these iterations were not backward compatible, which purportedly had a negative impact on Tapestry's adoption rate.

Offering a painless migration path to users may be a necessary element in keeping your existing user base, but as any developer who has attempted to preserve 100% backward compatibility will tell you, such an ambitious goal will quickly begin to consume eons of your time.

Unless you are Microsoft or some other entity with serious resources, you will need to make a choice between 100% compatibility and innovation. In my experience, you can't both improve your design and keep 100% (absolute) compatibility.

However, if you aim a little lower than 100%, you can keep evolving your API without severely impacting your existing users. Most APIs have parts intended for internal use and other, more public parts intended for the wider public. Changes to the internal parts may affect a handful of users, say one out of every thousand. In contrast, changes to the public API will affect every user.

If such a distinction makes sense for your API, confine your incompatible changes to its internal part. As mentioned earlier, these changes may affect a small proportion of your users, which may still number in the hundreds. Nevertheless, causing discomfort to a tiny minority of your users is still much better than a dead, i.e. non-evolving, API.
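By way of illustration, here is a minimal sketch of such a split, with entirely made-up names; in a real library the internal implementation would typically live in a separate *.internal package rather than being nested, but the idea is the same.

// Only the public facade is documented and guaranteed stable;
// the internal implementation is free to change incompatibly.
public final class Widgets {

    /** Public, stable entry point: its signature must not break between releases. */
    public static Widget create(String name) {
        return new DefaultWidget(name); // delegate to the internal part
    }

    /** Public, stable interface exposed to every user. */
    public interface Widget {
        String name();
    }

    /** Internal implementation: may be renamed, split or rewritten at will. */
    private static final class DefaultWidget implements Widget {
        private final String name;

        DefaultWidget(String name) {
            this.name = name;
        }

        @Override
        public String name() {
            return name;
        }
    }

    private Widgets() {
    }
}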

Friday, May 18, 2007

Dell delivers (not!)

My company has been a Dell customer for many years, having purchased four computers in the last 12 months alone. A few weeks ago we decided to purchase a new laptop, more precisely a Latitude D620. This baby comes with an Intel Core 2 Duo clocked at 2.0GHz and a 14-inch screen with a resolution of 1440x900 pixels. Most importantly, it weighs 2.0kg (4.4lbs).

We signed the order on the 30th of April 2007 and paid for it on the 3rd of May. Tracking the order on Dell's web-site, we noticed that the order was not being processed. I contacted the sales person to inquire about the order. She said that as far as she could tell no payment had been received and that she needed proof of payment to look into the matter. After I sent her proof of payment, it took another day for the accounting team to match our payment with our order. Nevertheless, with the payment glitch fixed, the laptop went into pre-production on the 9th, was finished the next day and shipped by UPS on the 11th, with expected delivery on Wednesday the 16th of May.

Lo and behold, we received it on the announced date, at around 11 AM. I was quite excited to receive this new laptop as a replacement for my older Inspiron 5100 (also from Dell). After 4 years of good and loyal service, my old companion still works nicely, but it weighs a hefty 3.5kg (7.7lbs). Since I have to schlep it on foot for about an hour each work day, 1.5kg (3.3lbs) less weight on my back was something I was looking forward to.

Opening the package, all the components were there. Unfortunately, instead of weighing 2.0kg (4.4lbs), my Latitude D620 weighs 2.5kg (5.5lbs), a 25% difference compared to my order and Dell's own specifications. When I contacted the sales person, she proposed to sell me a 4-cell battery, purportedly lighter than the 6-cell battery I currently had. Unconvinced, I asked to speak to her manager and somehow got disconnected. Sigh.

The second time I called, I was put in contact with a customer service representative who, recognizing the problem, promised to replace my laptop with a model of my choice. Needless to say, I was quite impressed by Dell's generous offer. Too good to be true: she called an hour later reneging on her previous offer, under a completely bogus pretext. Let me cut a long story short by saying that there is a limit to the amount of bull this particular customer (yours truly) was willing to put up with.

How can Dell hope to retain customers when what they deliver only approximates what they advertise? One of the customer support people at Dell went as far as acknowledging that Dell understates their laptops' weight to increase sales and that other vendors play the same dubious game. One thing is for sure: we won't be buying another Dell product anytime soon.

Friday, March 02, 2007

Reading XFire code

The various SOAP and WS-* related specifications have a reputation for being tricky and difficult to understand. The latest project I am involved in requires a relatively deep understanding of WS-*. One way to gain understanding of a specification is to closely study an implementation of it. Spurred by my previous pleasant experience with it, I picked XFire as that implementation of choice.

Anyway, checking out the project from SVN and building from the sources was fairly easy. A number of tests failed, but all in all, it was a breeze to get the various XFire projects nicely tucked under Eclipse.

From what I can tell, the code is a pleasure to read and feels like the result of fairly good design.

Monday, February 26, 2007

Founders at work

After reading the first 3 chapters of "Founders at Work" by Jessica Livingston, I can't help but recommend this book. Compared to many other books, where fluff in the narrative ends up diluting the content, the direct language of the various founders is both refreshing and inspirational. Each story is filled with unsophisticated yet brilliant ideas, each resembling a small gem.

I can't wait to read the remaining chapters.

Friday, February 09, 2007

SLF4J and logback gaining traction

It may not mean much coming from me, but the SLF4J and logback projects are gaining traction. The project mailing lists are showing real signs of life and community interest, and the download statistics are showing a significant upward trend.

We are not at the same levels of popularity as commons-logging or log4j. Nevertheless, it is very encouraging to see users responding favorably to our work. It feels like the early days of log4j, and that's pretty damn exciting.

Thursday, February 08, 2007

Advantage of open source

I recently had to use two very comparable products, one open-source and the other closed-source. The closed-source product had more verbose documentation, yet I actually managed to get the open-source product running and not the closed-source one.

More importantly, the API of the closed-source product, while very similar and accomplishing the *identical* task, felt awkward. I guess that bouncing ideas off users and listening to what they have to say makes a real difference in the end.

Although clearly at a commercial disadvantage, an open-source project has a structural advantage in creating a better product. Of course, for really large products, where the combined efforts of dozens of programmers are needed for prolonged periods, closed-source remains a valid alternative.