Tuesday 13 October 2009

No liars please, we're testers.

I've been prompted to write this after sifting through another load of dross CVs. I say another because I was recruiting heavily last year; a recruitment drive in which it took me twelve months to find just two test analysts... hold that thought.

It's not that I didn't get many applicants the first time round; I did, hundreds in fact (quite literally), and I duly read and scored every CV personally, and gave feedback too. Not only that, I also had two of my peers review the CVs. You can't ask for more than that. If two out of three agreed, the decision was final.

The reason it took a year was down to the quality of the CVs. At least half of the CVs we read didn't state the daily tasks that you would expect a career tester to be doing; only someone playing at being a tester would make that mistake. Unfortunately, sometimes the CV would tick all the boxes and we would invite the applicant in for interview. Imagine their horror when they were faced with a simple exercise to test their SQL skills, even though on their CV they had waxed lyrical about how they had used SQL so much they were practically fluent in it.

"So I have a table called customer, with the fields ID, Name, Address. How would I get all of the records from the table?" You should see some of the answers. It's often hard for me not to shout "just give up, I know you haven't got a bloody clue despite what it says on your CV" as they muddle on: "GET ALL RECORDS FROM TABLE WHERE NAME == CUSTOMERS AND ID...."
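For the record, the answer I'm fishing for is a one-liner:

SELECT * FROM customer;

Three fields, every record, no WHERE clause required.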

Imagine them also recoiling in their seats when I ask them to draw out the V-model and label it.
"What's that you say, you don't know how? But here on your CV you state you are a senior test analyst and have an ISEB Foundation certificate; how do you not know the V-model?" They shuffle their feet and mumble about how they studied at home; well, not actually studied so much as bought a guide on how to pass that's full of example questions.

I watch in wonder as their faces contort when I ask them "so what is the difference between white box testing and black box?" I let them fumble through telling me how they have used both of those "methodologies", and I follow up with "can you give me some examples of where you white box tested?"

Then come the questions on web testing (it's what we do, after all). "So what's a cookie?" I ask. They smile; easy, they think. "It's a virus you get from visiting sex sites." Oh my! What should I do as a tester? "Never ever accept cookies, they track all your movements, like little spies in your computer."
I follow with a simple exercise about shopping carts and sessions to see if the candidate understands why a cookie may be important here. "The system gets all that info from the cookie." But how did it get in the cookie? "From the internet." Can I see it? I want to see my cookie. "Oh no, you can't see them, they are secret."
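There's no magic to any of it, for the record. The server hands the browser a small named value with its response, and the browser sends it back with every subsequent request to that site, which is how a shopping cart knows which basket is yours. A sketch (the cookie name and value here are made up; the headers are real):

Set-Cookie: JSESSIONID=4F6A2C; Path=/   (server to browser, on the first response)
Cookie: JSESSIONID=4F6A2C               (browser to server, on every request after)

And yes, you can see them; every browser will happily show you the cookies it holds.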

I have even had to terminate several interviews because it became apparent very quickly that the applicant in the chair didn’t actually know what was on their CV because they had just copied it off a (delete as appropriate) friend / colleague / LinkedIn profile.

It was so bad in the past that we set up an online quiz. It was very simple: multiple choice, some questions around testing, some around our domain. Some very easy questions, "Which of the following is a search engine?", with an obvious answer, "Google". We discovered a side effect of an easy question like that: we could see how fast the candidate answered a question they knew straight off the cuff (about 9 seconds for that one) and compare it to a testing-related question that took them 3 minutes to answer (did they have to search for the answer?). The test was very easy for a career web tester, but not so easy for an IT support person or a BA, or even a developer who fancied a move into testing. Its only real purpose was to filter out the complete time wasters.

So here I am again: I'm hiring, I'm inundated with CVs, and again 50% are pure wasted bandwidth (I don't give them the luxury of printing them out). But this time I don't have the online test, and I'm gnashing my teeth at some of the unbelievable stuff in these CVs. Some of them read like horrible blog postings: "on this job we had this challenge and so we had to X because of Y but then Z happened and so we used plan A..." blah blah blah "then the business wanted B but I wrote the very detailed spec of C" bletch grrr spit pfftt. It's all I can do to stop myself posting these fetid monologues online for no other reason than ridicule, and I hate myself for it.

So the prospect of interviewing a load of (let's be blunt here) bullshit artists, only to show them the door at the end of it, isn't one I'm overjoyed with. I don't want to spend two hours of my life demonstrating why a candidate is a liar. I don't want to be associated with these bottom feeders in a professional sense either. I loathe them, and I loathe the arseholes who gave them a "consultant" badge at Logica (or any other faceless body shop), because now they think they are god's gift and we should roll out the red carpet for them.

I will continue to sift through the dross; the cream always floats, and that's what I'm after: the cream, the crème de la crème.

So if you are interviewing a tester who tells you that you gave them a much easier ride than a previous interview they attended, you'll know I rejected them, and you may want to make use of that probationary period.

But if you're a tester whose CV isn't straight up and down, you may want to rethink applying for a job with me.

Oh, and by the way, don't put "I have a keen eye for attention to detail" and then litter your CV with spelling mistakes, poor grammar and mixed styling!

Monday 12 October 2009

Hokey Cokey or Hocus Pocus

Back in September 2007 we released a new version of our search application.
The new version was a step change for us. At that time we were powering the core of our search offering with an Oracle database and a Java application that returned flat HTML. It was all very Web 1.0, and we had begun to see issues with the performance of the site, discovering that throwing 8 more servers into the Oracle grid didn't give us 8x more power. We took the Oracle database out of the mix and brought in Endeca search.

The Endeca API allowed us to show visitors how many of the things they were searching for were available before they submitted the search form. For example, if you were searching for a BMW 5-Series, the fuel type drop-down on the search form would list the number available next to each option [Petrol (5), LPG (2)]. A big change from the "build your search, submit it and hope it returns results" model we had previously used. To allow this feature to work we had to use Ajax, or more specifically JSON. As the user changed their criteria, the relevant drop-downs were updated without refreshing the form. So, like I said, a step change for the front end, the back end and user behaviour.
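To give a flavour of it (the field names here are invented for illustration, not our actual API), each change to the form fired off a request and got back a small JSON document describing the counts for the remaining options:

{
  "fuelType": [
    { "value": "Petrol", "count": 5 },
    { "value": "LPG", "count": 2 }
  ],
  "colour": [
    { "value": "Blue", "count": 4 },
    { "value": "Red", "count": 0 }
  ]
}

The JavaScript then redrew the affected drop-downs from that document, no page refresh required.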

The new version was released in stages, inviting visitors to the site to try the new version. This tactic has its own associated problems (for example, only a certain type of person will follow a "try our new X" link, so your new application doesn't get exposure to a good representation of your audience). Once visitors had interacted with the new search form, we invited them to give us some feedback, so that we could improve on what we had done. Below is a selection of that feedback:

It crashed 5 times and slow.
Takes longer, too complicated, should be kept simple
Too slow!!
Not as easy to use
Very slow to bring up menus, Spent time waiting.
It doesn't work – my searches keep spontaneously disapearing (Cars)
is slow. maybe is my broadband problem.
I don't want to know how many cars in the uk, I just want to know how many in my local area
It's silly to have to click 'show your results' it was better on the previous version where it showed the results.
Too slow in uploads.
More criteria = more time.
Too many details to put in .
More options, as not 100% encyclopaedic knowledge of cars, the sub model option was difficult .


So, pretty damning stuff. But something didn't make any sense. We had rigorously tested the performance of the system and were confident that it was faster than the old system. The market-leading browser back then was IE6, and given that we had engineered it for IE6, it positively flew in Opera or Firefox. So we were perplexed. That is, until we did some usability testing (I won't discuss the fact that the usability testing was too late in the project to be really beneficial).

The usability testing did allow us to understand why we got so many "slow" comments in the feedback. Faced with all the feedback the new search form gave them as they refined their search, users believed two things: 1) that they had to fill in all of the options, and 2) that they couldn't interact with the form until the animated counters stopped moving.

Manufacturer, Model, Variant, Trim, Colour, Fuel Type, Mileage, Age, Min Price, Max Price, Distance from the visitor. As the user slowly changed each of the drop-down controls on the search form, some options would become unavailable (greyed out). This was because the back end had contracted them out of the possible results. If no red BMWs were available, Red would not be available for choice on the colour drop-down. So the user would change, say, Model to 3-Series and find there wasn't any Red available on the drop-down, so they would back up and change 3-Series to 5-Series and so on. They didn't realise you could just search for all the red cars within 20 miles of their house, and drill down from there. To some extent they still don't, two years on.

It reminds me a little bit of when I was working on a project with BT and the then new System-X exchanges. The switches could support loads of (then) new features (things we take for granted today, like 1471 in the UK). Being a geek I was amazed at what I could do with a DTMF (touch tone) phone, and went out immediately and bought one. The next day I asked why BT hadn’t publicised any of the features and capabilities. Their response was immediate, dry and serious. “Our users won’t understand them”. I can still remember how I felt, almost like I had stumbled into some great conspiracy. BT wanted to keep people in the dark, and protect them from the nasty technology that might confuse them.
It was several years later that I received a booklet with my phone bill that explained the features and how to access them. Having used the features for some time at that point, I had great difficulty in understanding the booklet. Maybe BT were right; maybe it was all too confusing.

Fast forward to now, and my current project. Again another release, and another step change. This time the look and feel of the site has been overhauled. The back end is still Endeca powered, but the Java app has been completely rewritten. And in rewriting the application we have taken the opportunity to bake testing in from the start. The JavaScript, cascading style sheets and HTML are all tested automatically. Regression should be a thing of the past (but that's another blog post); the application has unit testing, functional and non-functional testing applied at every check-in. The functional testing has been expanded into "User Journey" testing, in which likely user scenarios are played out. All of this happens automatically before the application reaches QA. Then the QA team goes to town, with time to exercise their real skill, exploratory testing. So there you have it: never in the history of our company has a product been so well inspected. So we felt pretty confident when we were ready for Beta.

This time round, instead of inviting users to try out our new site, we employed A/B testing. 5% of our traffic was diverted to the new site, and once again users were invited to leave feedback. I took the opportunity to set up a Google Alert to spot the use of the beta URL in forum or blog posts, so I could keep track of what the community was saying.
Once again the feedback came in…

The used car search, the old one is much clearer to use and a lot better, . The new "improved" one is poor.
Preffered old site looked more proffesional and was easier to use.
The search criteria should be your main focus and keep that in a clear box format like your old site and allow people to search quickly but also as specifically as they want.
The old site is much better the new site is more complicated to use in the end I shut it down and went on to ebay please change back.
It looks much better than the previous website, but since I dont live in UK, I usually have to copy and paste the London postcode from the FAQ page. Unfortunately, I cannot find the page.
Bad design. Not as easy to use and selct options, not as clear and concise. the old one was perfect.

Erm, what? The old one was better? Perfect? Now we are confused.

So again we tackled the perceived issues of our users. We kept seeing comments about missing images, and we started pulling apart the application, the infrastructure and the network. It turned out to be an ad blocker that had decided that the way we format our image URLs (cache busting) made them look like adverts, and blocked them.
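For illustration (our real URL scheme differs), cache busting just means stamping a changing token onto an otherwise static URL so that a new release forces browsers to fetch a fresh copy:

http://www.example.com/images/vehicle/12345.jpg?v=20091013

It was a pattern like that in our image URLs that the ad blocker's filter list decided looked like an ad-server call.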
People complained of slow loading times, so I began to conduct some testing around that. I concluded they may be right, so we engaged Gomez to find out for sure. Gomez showed something alarming: users on a half-decent (2Mb and above) broadband connection get a decent experience; people on anything less are going to be pulling their hair out. The Digital Britain report suggests that most of the UK has 3Mb broadband, so do our users just have slow connections? Regardless, I have begun some work on improving the perceived page load times, and will roll those requirements into cross-cutting requirements in the same way as we do for SEO and DDA compliance. We are going to lighten the page weight and strip out the heavy jQuery that is only used to titillate. We are going to build our own analytics into the front end that will allow us to see in real time what the users experience (current render times etc.), and we are moving some of the content so that it resides under a new host, allowing it to be fetched in parallel by the browser. All of this should help the users with slow connections.
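The new-host trick deserves a line of explanation: browsers cap the number of simultaneous connections to a single host name (IE6 and IE7 allow just two), so serving static content from a second host lets the browser download more in parallel. A sketch, with made-up host names:

<!-- before: every asset queues on the two connections to www -->
<img src="http://www.example.com/img/car.jpg" />

<!-- after: static assets download in parallel from a host of their own -->
<img src="http://static.example.com/img/car.jpg" />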

But what about the "it crashes my browser" comment? Our in-page analytics trap JavaScript errors and report them. And while our users suffer at the hands of errant JavaScript squirted into the page by third-party advert traffickers, our own code is solid. So what's this crash?

We contacted a customer who had left his details and asked him if he could walk us through the “crash” and we would follow along step by step in the office. At the point where he claimed his browser had crashed, we were watching the light box “melt away”, something we had designed in. His expectation was that the light box would work like a tab, and that he could tab between the photos and the detailed specification of the vehicle. Not melt away to the bottom of the screen. So now we will remove the animations on the light boxes (and other objects).

What have I learnt?

Three things:

1. Next project, I’m running the usability testing, with real scenarios and everything.
2. Perceived performance is more damaging than actual performance.
3. BT may have been right...

Friday 18 September 2009

How will we preserve Twitter, Facebook or LinkedIn?

On Tuesday I attended a talk by Doron Swade at MOSI for the Computer Conservation Society.

Doron Swade is an engineer, historian, and museum professional, internationally recognised as the authority on the life and work of Charles Babbage, the 19th-century English mathematician and computer pioneer. He was Senior Curator of Computing at the Science Museum, London, for fourteen years, and during this time he masterminded the eighteen-year construction of the first Babbage Calculating Engine built to original 19th-century designs. The Engine was completed in 2002.

Doron was talking about the historical and cultural issues in the history of computing that he faced at the Computer History Museum in Silicon Valley, California, when something struck me.

The big kids on the block today are not the computers but the programs they run, the software. There hasn't been any significant advancement in computing hardware for some time. However, the internet is changing the way we communicate and socialise, and somehow we will need to preserve it for historical interest.

But how on earth will we preserve software like Google, Twitter, Facebook or MySpace? The software that powers these sites is only part of the puzzle, because it's the content that makes these sites what they are. Terabytes of user-generated content. How can we preserve that so that in 60 years we can look back with the same fondness with which we look back at the Manchester Baby, the UNIVAC or the IBM 360?

Once you have wrapped your head around that task, who will test such a system? And how will they ensure that it's a true representation of what those sites look like today?

While I sat there among some of the early pioneers of British computing, who were gently dozing off, I wondered if one day I would be sat in that room while tomorrow's Doron tells me about the problems faced with ranking the websites: Facebook before Twitter, or LinkedIn before MySpace?

Monday 7 September 2009

If you are not thinking about performance, you are not thinking

Performance testing is a funny old thing, in that whenever the subject comes up, people get all hot and bothered about it. The thing that really tickles my fancy is when developers suddenly get righteous about testing!

Testers and developers have a totally different view of the world. The best testers I have worked with have a real need to dig into systems. Even with black box testing they find a way to work out what a system does way beyond its simple inputs and outputs. They can't help themselves. It is almost like they can't pass Go if they don't break the system; almost an addiction (or is that affliction).

Now that the developers find themselves writing unit tests, integration tests and acceptance tests they think that overnight they have learnt everything there is to know about testing, right? Wrong!

Yes, sure, a developer can write a test, but they often struggle with the intent of the test, and more so with non-functional testing like performance testing. Let me shake it down.

Ok, so the business wants to monetise their existing data by presenting it in a new way, for example "Email Alerts", you know the sort of thing. You create a search, and when your criteria are met you get sent an email.

The developer sits down to think about performance testing, and thinks about how the system works. In our example here, the system will fire the searches every night, when the database is relatively quiet so that we don't overload the system during peak hours.

So the developer thinks, OK, I'll create a load of these "alerts" using SQL inserts, then fire up the system and see how fast it can work through them.
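Something like this, in other words; a sketch of that seeding approach, with a made-up schema and connection string:

import java.sql.*;

public class AlertSeeder {
    public static void main(String[] args) throws SQLException {
        // Bulk-load 100,000 near-identical alerts in one sitting.
        // Quick to write, but nothing like the traffic a live system sees.
        Connection conn = DriverManager.getConnection("jdbc:oracle:thin:@//dbhost:1521/alerts");
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO alerts (user_id, criteria, email) VALUES (?, ?, ?)");
        for (int i = 0; i < 100000; i++) {
            ps.setInt(1, i);
            ps.setString(2, "make=BMW&model=3-Series&fuel=Petrol");
            ps.setString(3, "user" + i + "@example.com");
            ps.addBatch();
        }
        ps.executeBatch();
        conn.close();
    }
}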

They do just that and get back some statistics: the number of threads, the amount of memory the JVM consumed, how many connections to the DB were needed, how many searches were executed, how long it took to execute a search, that sort of thing. They call meetings and stroke their chins in a sage-like way. The figures look about right.

But in real life the database would never have the alerts inserted into it in that way. It's probable that users would be inserting data at the same time as it was being read out. Also, the product isn't likely to go live and have 100% take-up overnight. It's more probable that take-up would be slower, perhaps taking weeks or months and never achieving 100%. Old alerts would be expiring and some users would renew those, while new ones are being created and others are being edited (change of email address etc.).

The crux of the matter is mindset. The tester sits down and thinks: what could go wrong? What happens if the DB is unavailable at the time the batch job runs? What happens if the DB needs to be taken down for maintenance during a batch run; will the batch pick up where it left off? Can the batch job complete before the off-peak period comes to an end? Can the mail server handle the number of emails to be sent? What happens to email that bounces? In other words, the tester takes a step back and looks at the system holistically. Because a user doesn't give a damn if your search engine can execute a query in 33ms if they don't get the email until 12 hours after it was relevant.

Now, on the current project we have completely rewritten the platform. New version of Java, new styles of writing code, new infrastructure etc. The search engine technology is the same; however, during the life of the project the API has been updated and a particular feature enabled. This feature allows us to search in a related way. Generally speaking, it allows us to do the Amazon type thing: "People who search for X also search for Y". But it comes at a cost; it takes longer to return the result set (of course it does, it's got to do another search under the hood).

Again, during "testing" the developers just threw as much load as they could muster at the application. But guess what: now it's live, the figures they got during testing don't match. Not even close.

It isn't like I hadn't been bleating on about performance for months. I even printed out posters that read "If you are not thinking about performance, you are not thinking" and stuck them above the urinals in the toilets.

It's only now that the team are in the spotlight (it's so bright it burns) that they have come to us to ask for our help. Once again we are trying to polish a rough product instead of building the quality in from the start. Once again we can't.

It doesn't matter a damn that we went all-out Agile, TDD and XP if the thing doesn't perform. The end user doesn't care that we have continuous integration; they know it's damn slow and they vote with their feet (or keyboards/mice).

Friday 4 September 2009

Skills not Roles - Communities not Teams.

When I decided to put this blog together, part of the impetus was using it as a historical repository. There are a great many of my posts on the internet today that date back to 1999. Ten years is a fairly long time in internet years, and looking back over those posts I can see how my understanding of different topics has evolved and how much I've learnt and grown. That said, it's not going to work (as a historical record) if I don't post anything, is it!

The reason I haven't posted for quite some time is twofold. Simply put, I've been too busy at work and too busy at home. That is to say, the blog has had to take a back seat. I'm sorry; I'll try harder.

OK on to the post proper.

So this post will differ in that I'm not going to discuss technologies like Twist or Selenium or WebDriver.

I want to talk about something that is crippling the project I'm currently working on.

For a project to be truly Agile and Lean, it needs to be able to respond to the challenges we face daily in IT, and to overcome those challenges without waste. So why then do we have to hand off tasks like deployments to another, non-Agile team? Moreover, the handover is done via an abhorrent "work-flow" tool that absolves the receiving team of any responsibility for quality: "I've done my bit mate, it's with team X now".

I want to get rid of teams as we know and recognise them today and usher in communities. Yeah sure the name is a bit hippy-ish but then so is the ideal. Skills not roles. If someone within our delivery community has the necessary skills to deploy some code to a database or server then why do we have to interface with an external team? If we have the capability, and we are responsible, what's the problem?

OK, sure, the guys who look after the production systems want to achieve 99.999% uptime (26 seconds of downtime a month), and often they are targeted on this, so they become averse to change. After all, any change increases the risk of a failure. However, if we have tested the code not once, not twice, but umpteen times, and more importantly we compiled the code only once, and all previous deployments have gone without incident, you could be forgiven for thinking that the deployment could be considered safe. A non-event. We should be able to deploy the code at 17:30 on a Friday afternoon and skip off home, safe in the knowledge that the site is up and running, humming along like a well-oiled machine.

However, those teams have become so averse to change, or to risk as they perceive it, that they actually start to display behaviours reminiscent of the 1970s trade union shenanigans that plagued British industry: "you can't do that mate, not your job. Not anyone can deploy code you know, oh no. Where would we be if just any old Tom, Dick or Harry could deploy code willy-nilly?"

As an Agile commune focused on the delivery of our project, we would share skills and socialise ideas. We need to create innovative environments that promote people trying new things. This aids the members of the community, which in turn benefits the business. So not just anyone could deploy the code; only those people who had the skills and were responsible in the execution of their duties.

The more I think about this, the clearer it all becomes. I suddenly find myself questioning my own role as a "people manager" within such a community. After all, my role in its current shape would be wasteful. I should not manage the team (to be honest, that's not my natural style); I should coach and mentor, not preach and target. I should lead by example, not by autocratic rule. As Alan Keith of Genentech said, "Leadership is ultimately about creating a way for people to contribute to making something extraordinary happen."

I've run this idea past my peers. Older peers agree with me; they see it as a way to empower individuals, and therefore the community they reside in. But younger, less wise peers are worried. "How will we administer pay grades?" they ask. "How will we hire people if we don't have recognised roles?"

It's really quite simple. Individuals are rewarded for the skills they have, not their ranking within a role. Why should an experienced tester with polyglot skills and several years of domain knowledge be paid less than a BA with flaky knowledge of your technology platform? What, because business analysts traditionally earn more than testers? For that matter, why should a developer be paid more than a business analyst if the BA can also test? Two skills vs one. The current game is rigged and is demotivating.

Hiring is also easy. You want titles for your people? Call them analysts. Then all you need to do is hire analysts with the appropriate skills for your domain and your platform. Other companies call their staff "consultants", and they hire consultants with the appropriate skills for the client they engage with.

Once you have a pool of multi-skilled analysts, it would be easier to create a community that had the right skills to deliver the project, instead of worrying about the interfaces to external teams or having a shortfall of a particular discipline within your community. You can select your community members based on their proven experience, their skills, their domain knowledge and the feedback they receive from the communities they have previously worked in. A community is unlikely to carry a lazy person who knows little about the domain and has few or poor skills.

Now, I'm not saying we don't need SysAdmins or DBAs or networks etc. We need all those teams, and what they do is invaluable to the delivery of the project. But do those external teams need to perform what amounts to mundane tasks for us? Shouldn't they concentrate on what's important to them, the stability and performance of their area? Because as it stands today, at the point we interface with those external teams for the execution of a task that could be carried out in-team by an appropriately skilled and responsible analyst, they become a blocker and they become wasteful. We sit twiddling our thumbs while we wait for those teams to follow their internal processes and use the infernal work-flow tool (whose only real purpose is to provide the business with yet more meaningless statistics).

When I have approached the external teams, I have found that they harbour a fear of "Agile", and I think this fear is the real problem. They feel uncomfortable, anxious, or inadequate. They worry that by allowing us to take responsibility for our own actions they will be allowing themselves to be exploited, and that by denying us they are protecting their rights as individuals.

The business feels the same way. Despite the corporate line being "we are Agile, lean, innovative...", they fear change to the point that they have implemented a change management process and a change manager, and recently said we have to use SharePoint (ffs). But what the business hasn't realised is that as a tester I am risk-averse (no really, it's a curse), so we are actively baking the quality into our products and continuously inspecting them for that quality, through unit testing, integration testing, and acceptance testing.

It will be incredibly hard for the external teams to let go, especially while the business is frozen with fear, but in the future they will have to. They will have to, or we may as well pack up and go back to PRINCE2. Not while there is breath left in my body...

Monday 30 March 2009

Life after ThoughtWorks?

After working with ThoughtWorks I’m completely sold on Agile software development for the type of product we deliver. Which if you know me may be a bit of a surprise as I have a reputation for being a cynical old curmudgeon, and a staunch doubter to boot.

While I am still unsure if agile would work for large scale software projects (think air traffic control) I am certain that there are agile practices (lean for example) that could be pilfered and applied to those lumbering projects too. If you are reading this and you haven’t had any exposure to Agile, I guess the biggest thing I should tell you is that this isn’t some process you just pick up and run with, nor is it a methodology to apply. You can’t learn it from a book, or by a project template; it is quite simply a mindset and as such it presents the practitioner with quite a shift in paradigm. Simply put, you can’t flick a big agile switch and hey presto you are Agile; it is more subtle than that.

I think before I continue it's important that I clarify something. The term agile is applied to a wide variety of processes, techniques, methods, tools, practices, projects, and phases of the development life cycle; it has become a buzzword used by people trying to paint their work in a new light (who doesn't want to be known as being agile?). It's important, therefore, to set out some basic definitions and context for the use of the term "agile", especially as I will use it constantly throughout this article.

Within the context of software development, the term "agile" (with a small "a") is meant to imply that the development team is nimble, flexible and responsive to the business needs, and that it is able to adopt new technologies and techniques that can improve software delivery. The term "Agile" (with a capital "A") refers to a very specific set of processes (and I use the term process as more of a placeholder) applied to software development that have evolved over the past fifteen years or so, including some you have probably heard of, like eXtreme Programming (XP), Scrum, Feature-Driven Development (FDD), Crystal, Dynamic Systems Development Method (DSDM) and Lean Software Development. A non-profit organisation, the Agile Alliance, was created by the people behind most of the Agile processes. The Agile Alliance promotes a set of core values that a process must follow to be called Agile:
  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

So to be Agile, then, a process must support these values (and more), albeit in diverse ways. Some processes, for example Scrum, address team management, while others, such as XP or DSDM, address development activities or other activities of the software development life cycle. It's worth making a mental note that users of an Agile process do not have to follow all of its practices, and neither does the use of one process preclude the use of any other. One thing I learnt is that Agile supports such process change: if a particular way of working is not working, then change it. In fact I found that many of the Agile practices are complementary.

No matter your preference, all of the different flavours of Agile will deliver working functionality in short, time-boxed iterations. They implement early and frequent testing. They require lots of involvement from the customer on a frequent if not full-time basis, and they assume that the customer's requirements will continually change.

So why all the fuss about big "A" little "a" when talking about Agile? There are two main reasons:
  1. Companies who adopt Agile processes have to be prepared to completely change not only the way they develop software but how they think, and this change is so big it is very hard to do. However, companies that have achieved this shift will appreciate the significant benefits they can reap in the productivity, quality and value of the software that they deliver. Notice I didn't say it would be faster. These companies are Agile with a capital A.

  2. Companies that are not able to embark on this level of change can still become more reactive and flexible in the ways that they build software. They become more agile and begin to realise the advantage that Agile can deliver. This toe-dipping exercise can lead to a true Agile team; however, the company must understand that it's not the preferable way, and that they would benefit more from the gung-ho approach to Agile adoption.

To reiterate what I've said: at a minimum, every Agile process delivers working functionality in short, time-boxed iterations. Agile implements early and frequent testing; it involves the customer on a frequent if not full-time basis; and it assumes that requirements cannot be fully defined at the start of a project, and that they will continually change. The ethos is simple: by using these practices, the development teams will be able to respond quickly to ever-changing customer priorities and feedback, and deliver value to the business. This is often misunderstood as being quicker; I would prefer to describe it as improved time to benefits.

As a tester I have become quite accustomed to being involved late in a project, often right before delivery. I have adapted and learnt how to cope with shortened time frames for testing, and with receiving specifications and requirement documentation that don't actually match the product delivered in QA. Typically the business (or for that matter the project team) has little interest in the test team's input; that is, until they call us a bottleneck.

Feeling loved and appreciated.

Agile software development for a tester is radically different from the traditional PRINCE2/waterfall software development lifecycle (SDLC), because it throws QA right into the heart of the project on day one. As Agile testers we suddenly found ourselves being involved in the analysis and design of the product. We became heavily involved in decision-making throughout the whole project, and because the delivery of the software is incremental we found that we began testing at the very beginning of the project and had to maintain pace and keep step with development to prevent any delay. This is a far cry from waiting for a software deliverable (possibly unfit for purpose) to be thrown over the fence a few weeks before go-live.

The paradigm shift I have mentioned is a large one, not just for QA but the whole project team, because our QA team now drives the entire software development process.

Quality assurance, with its focus on preventing defects, is translated into the agile practice of having committed QA resources on the development team that participate in decision-making on a daily basis, throughout the life cycle of the project.

Their input during elaboration and design helps developers write better code. More “what-if” scenarios are considered and planned for, as the collaboration between coders and testers gives the coders more insights than if they were to have planned the work on their own.
Likewise, the testers gain added insight into the expected functionality from the coders and the product owner, and are able to write more effective test cases for the product.
Quality control places its emphasis on finding defects that have already slipped into the system, and working with developers to eliminate those defects. This bug-checking is done within the iteration, using techniques such as daily builds and smoke tests, automated regression testing, unit testing, functional and exploratory testing, and acceptance testing. Everyone participates; no one is exempt from the task of ensuring that the feature coded meets the customer's expectations.

Your role morphs and evolves.

Through a methodology known as "Story Test Driven Development", the test requirements (aka the acceptance tests) are captured in a test-like format (we currently use Twist) and are then augmented to make them into automated tests. Nothing unusual in that; lots of test teams create automated tests. However, here the automated tests are being executed by the development team, not the test team, and the tests exist before the development team have even created the code for the software the team is delivering. The real beauty of this method is that the development team can then integrate the automated tests into a Continuous Integration environment, where the newly checked-in code is built and tested automatically. So the QA team's test suite is run against the code every time a change gets checked in. But wait, that's not all. The delivery of code into QA cannot happen until it has passed all of our tests, which means that we have built the quality in from the very start. Let me state that again, in case you missed it: the development team cannot deliver code into QA until it has passed the QA team's tests. That's a statement that should raise a lot of internal debate with any passionate tester who hasn't worked in this way before, and it did with us. It's worth me making the distinction here that Test Driven Development (TDD) is not a testing methodology, it's a design methodology. Being testers, we have a very different view of the world to a developer; it's in our very nature. Working this way felt like we had harnessed our power and put it where it belongs, under the smelting pot of code.
I should also mention here that the developers were also working in a new way, because they worked in pairs (pair programming) and also used TDD at the unit level. This means that a developer has to write a unit test before he writes the code for the unit being developed. That means, then, that before it's delivered to QA, our code has been tested twice. That's two times tested.
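To make that concrete, here is the shape of the thing (the names are invented, and it's simplified from what Twist actually generates): a plain-English scenario step backed by a method whose name matches the step text.

Scenario step: User searches for used BMWs within 20 miles

// The backing method in the workflow class. Twist maps the step text
// to the method name; the body drives the browser through Selenium.
public void userSearchesForUsedBMWsWithin20Miles() {
    selenium.open("/search");
    selenium.select("make", "label=BMW");
    selenium.type("distance", "20");
    selenium.click("searchButton");
    selenium.waitForPageToLoad("30000");
}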

So if development is running the QA team's tests, and the QA team are writing the automated tests (and it's a blurry line between coding and writing an automated test), and the code is tested before it's delivered to QA, a whole load of questions begin to bubble up.

Some of early questions we had:

  1. Who tests the testers’ tests?
  2. This is agile, so what happens if the requirements change?
  3. So what do QA test if it’s already passed the QA tests?
On a typical PRINCE2/waterfall project the testing team plans as far in advance as it can. We normally try to follow the IEEE 829-1983 standard and document as much as we can.
The documentation covers how we approach the testing and details the testing activities. These documents are usually created in isolation from the project team and may be published for approval. However, in my experience the documents are rarely scrutinised, and any feedback given only serves to pay lip service to the process. Another checkbox checked, and another Gantt chart milestone met.

Working in an agile testing environment still requires you to define which tools and methods will be used for writing, executing and reporting tests, and to determine the best approach to testing and the scope of that testing. The big difference is that the whole team is engaged in this definition, and we found that it was important to engage the developers in it, because they would be executing our tests and writing their own unit tests. Moreover, thought had to be given to automating the regression testing, something that would happen as part of the continuous integration process.
The business stakeholders were also involved in this process (unfortunately only by proxy through the business analysts, as they are physically located in a different part of the country), as they would help to define and run the acceptance tests. In agile, we (the whole team) all test, but the business accepts.

In short within agile practices, everyone has a contributory part in defining, upholding, and improving the quality of the product.

One of the gotchas was that we found we needed to become more technical. We had thought that we were already more technically skilled than your average tester; however, as we endeavoured to automate our testing we found that we had to skill up and learn not just how to write code (Java for us), but how to compile the code and version control it. They were steep, steep learning curves which, now we have overcome them, have empowered us, giving us the tools for a brighter, faster and more accurate future. Having said that, it does appear that some of this coding effort could be considered a one-off setup task, because we now have a framework of "tools" that cover all of the tasks we need to help us execute an automated test. Couldn't we have asked the development team to do the tech tasks for us? We did, and they didn't have any resource free to accommodate our needs as well as those of the agile project. Regardless, we now have skills that we have been able to share with the wider QA team, and they too are seeing the fruits of our initial labour.

Traditional Tools Solve Traditional Problems in Traditional Contexts. Agile Is Not Traditional.

Traditional, heavyweight, record-and-playback tools (like Quality Center) address the challenges faced by teams operating in a traditional context with specialisms. They address the challenge of having non-programmers automate tests by having record-and-playback features, a simplified editing environment, and a simplified programming language.

But Agile teams don't need tools like these (optimised for non-programmers). What Agile test teams need are tools to solve an entirely different set of challenges, challenges related to collaborating, communicating, reducing waste (muda) and shortening the feedback loop. Ergo, traditional (long-standing) test automation tools just don't cut the mustard in an Agile context, because they are designed to solve traditional problems in traditional contexts, and those really are quite different from the challenges faced by Agile test teams. To make it clear: QC & TD aren't going to cut it in Agile.

At a Google Tech Talk on December 9, 2005, Elisabeth Hendrickson gave a talk on how, as more teams adopt Agile practices such as XP and Scrum, software testing teams are being asked to become "Agile" as well... View it here

Wednesday 25 March 2009

Using Twist With Different Selenium Versions

When you start using Twist you will find that the Twist team have baked Selenium into Twist.
You will also find that the version of Selenium that has been integrated has some quirks.

Here is a simple guide for using any Selenium version with Twist.

You can also use this guide to help you implement other drivers like WebDriver.

Step 1
Add the selenium-java-client-driver.jar and the selenium-server.jar of your preferred Selenium release (I'm going to be using my old friend Selenium 0.9.2) to the Twist project classpath, and make sure these jars are loaded first in the classpath order.


Step 2
You need to create a factory class that will create the Selenium instance for your test suite.

For example, let’s create "SeleniumFactory.java" in the Twist source folder.

import java.io.FileInputStream;
import java.util.Properties;

import org.apache.tools.ant.types.Commandline;
import org.openqa.selenium.server.RemoteControlConfiguration;
import org.openqa.selenium.server.SeleniumServer;
import org.openqa.selenium.server.cli.RemoteControlLauncher;

import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

public class SeleniumFactory {

    private SeleniumServer server;
    private Selenium selenium;

    public void start() {
        try {
            // Read the Twist configuration from the project classpath.
            Properties properties = new Properties();
            properties.load(new FileInputStream(getClass().getClassLoader().getResource("twist.properties").getFile()));

            // Parse the server options (port, proxy settings etc.) out of the properties.
            String[] serverOptions = Commandline.translateCommandline((String) properties.get("selenium.server.options"));
            RemoteControlConfiguration serverConfiguration = RemoteControlLauncher.parseLauncherOptions(serverOptions);
            String browserLauncher = (String) properties.get("selenium.browserLauncher");
            String browserURL = (String) properties.get("selenium.browserURL");

            // Start the Selenium server first, then open a client session against it.
            server = new SeleniumServer(serverConfiguration);
            server.start();

            selenium = new DefaultSelenium("localhost", serverConfiguration.getPort(), browserLauncher, browserURL);
            selenium.start();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public void stop() {
        // Always stop the server, even if stopping the client session fails.
        try {
            if (selenium != null) {
                selenium.stop();
            }
        } finally {
            if (server != null) {
                server.stop();
            }
        }
    }

    public Selenium getSelenium() {
        return selenium;
    }
}


Step 3
We now need to remove any bean definitions with id="seleniumFactory" and id="selenium" from the applicationContext-suite.xml file.
You will have to manually edit the "applicationContext-suite.xml" of the Twist project and add the following bean definitions.
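Something along these lines (the exact attributes may differ with your Twist version, but the shape is two beans: the factory itself, and a selenium bean created from it):

<bean id="seleniumFactory" class="SeleniumFactory" init-method="start" destroy-method="stop"/>
<bean id="selenium" factory-bean="seleniumFactory" factory-method="getSelenium"/>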

Step 4
The workflows will now have to depend on the Selenium interface.

For example,
private final Selenium selenium;

public NewWorkflow(Selenium selenium) {
    this.selenium = selenium;
}

With the above changes, your scenarios will now use the Selenium server and driver jars you added in Step 1.

Note:

The selenium options are not the same between releases. The beta-2 release of Selenium RC contains some significant changes you should be aware of. You will need to account for these and change these values in "twist.properties" as required.

For example:

selenium 0.9.2 config: selenium.browserLauncher = *firefox
selenium 1.0 Beta 2 config: selenium.browserLauncher = *firefoxproxy

selenium 0.9.2 config: selenium.server.options = -port 4545 -avoidProxy
selenium 1.0 Beta 2 config: selenium.server.options = -port 4545 -avoidProxy -honor-system-proxy -singleWindow

For a full list of the changes take a look at http://clearspace.openqa.org/community/selenium/blog/2009/01/13/selenium-rc-beta-2-goodies-and-gotchas

Monday 23 March 2009

Choosing Twist - Part two

The next Big Thing

Enter our next big project. We (the QA team) held a huddle to talk about the capability of Fit and FitNesse for the new project. We all agreed that we would sooner avoid it if at all possible; in fact the language used during the meeting was much stronger than that. It was around that time I was shown a demo install of Twist, and I immediately asked the guy showing me, "what is the problem I have with FitNesse that Twist will resolve?" He looked at me as if I was crazy, then reeled off all of the pain points that we knew, and then some.

It’s worth having a look I thought, and installed the 30 day trial. One of the guys who had been very instrumental in fighting the FitNesse issues also installed it, and we said we would take a look at it and then compare notes.

At first we didn’t get it, but some things are very apparent with Twist.

For those of you that don't know, Twist is built on top of the Eclipse IDE. I should mention here (for existing Eclipse users) that it's not currently available as a plug-in for an existing Eclipse install (however, that feature is in the programme plan). That means it comes with the entire feature set that Eclipse has, e.g. integration with source control (CVS or SVN for example), albeit via a plug-in. The team write their tests, run them locally to make sure they work, and commit.


Search
You can search across your code, files or project. Search and replace across an open file or the whole project.

Keyword completion.

This last feature is a biggie for us. If you create a method called UserOpensUrl then the next time you type U-s-e-r and press Ctrl+Space it pops up a list of matching methods.

Straight away people can see which methods containing that phrase have already been written.

If you have used Eclipse before, then all the usual features and shortcut keys you love are available (I'm still learning new ones every day).

But what about Twist? What does that bring to the party?

The most striking thing is the WYSIWYG scenario editor.

If you have grappled with FitNesse, then the ability to make text bold and italic with the click of a button is great.

Not having to remember any mark-up language allows you to concentrate on the task in hand, writing tests.

The scenarios are broken down into several areas. The first part is where you write your test prose, the purpose or intent of the test. The second part is where the test proper is written.

Tests are currently written using either bullet points or tables.

And just like FitNesse, a passed test turns green.

The stats for all the tests are given in a separate panel.

The scenario editor allows a scenario to have one or more tags associated with it. These can be tags such as "QA Complete" or "shopping cart" or "smoke", or all three, or none. These tags come into their own when you only want to run particular types of test, e.g. if you only want to execute tests that are QA Complete and are for the shopping cart area of your site; this is especially useful for CI builds.

Tags can be applied at a scenario level (while you are editing a scenario), or they can be applied or removed en masse.

When you have written your tests they need to be instrumented.
This can be achieved in a number of ways. Either right click and select Quick Fix from the context menu

Or use the quick-key combination of Ctrl+1 and you are presented with another context menu

“Create method” simply creates an appropriately named empty method ready for you to work on.

Again, this is great where you don't have development skills within your team, as your team can focus on writing the scenarios while the dev team work on instrumenting the empty methods.

Choosing Record from the context menu starts a Selenium server (which currently only supports Firefox) and fires up Firefox. The record system differs a little bit in that it first runs through any previous steps in the test if you have written any, then begins recording at the new point in your test, and completes the method.

I have only touched on a few good points here, those which counter the main pain points we had with FitNesse. One of the community's biggest gripes with Twist is the lack of good documentation, which, when you consider it is a paid-for product, is a little shameful.

I should also point out that while I was writing this post, ThoughtWorks Studios released the first GA version of Twist, which I have yet to try out...

Ferrograph SDX - scrolling LED sign

One important aspect of being Agile is having short feedback loops. Wherever possible we aspire to reduce feedback times, and one aid in decreasing feedback time is a feedback monitor. Having some sort of visual indicator of the state of the project is obviously valuable. It's believed that, used correctly, it can boost productivity and morale within the team considerably; however, there is a certain amount of kudos to be gained for having anything more than a VGA screen.

A quick Google for Extreme Feedback Devices (no, I'm not talking about a Marshall JCM800, 4x12" cab and a Gibson Les Paul) will turn up a large number of hits for glowing orbs, VGA screens, lava lamps, lots of bespoke devices powered by Atmel AVRs, devices controlled by X10, water features, gummi bears and so on. One post by Dirk Ziegelmeier caught my eye; he had used a scrolling LED message board.

I picked up an old Ferrograph Aurora 64 SDX moving message / LED wall board, the type that is typically used in a call centre environment. In fact our call centres employ these boards to show the inbound call queue, and average call waiting times etc. for the Avaya Index ACD system.

The Ferrograph SDX boards turn up quite frequently on eBay, and I had (somewhat naively) assumed that there would be enough info on the internet for me to be able to integrate the sign into our build monitor. Like I say, I had assumed, and as Eric Bogosian put it so eloquently in "Under Siege 2: Dark Territory" (1995), "assumption is the mother of all f**k ups".

The sign came with a short pigtail of cat5 cable terminated in an RJ45 plug. I have seen these signs listed on eBay as being "networked"; this is an incorrect assumption on the seller's part, as this is not a network cable but a serial cable. In my case the sign had been configured for RS422. Being a hardware geek more than a software geek meant I had the sign opened up and running on RS232 in a matter of minutes.

Inside the sign is a main board, at the end of which there is an RS422 header and an RS232 header. Although there didn't seem to be a pin-out printed on the PCB that I could see, I traced back a little to work out which pin was which (a guesstimate based on where the pins ended up), soldered up a 9-pin D-connector to three wires (gnd, Rx, Tx) and stuffed it into the back of my PC. I took a guess at 9-n-1 (as it's a fairly standard HW setting) and sent "hello world" using the Alpha 1-byte protocol. The sign displayed nothing, nada, zip, bupkis, diddly-squat, zilch, bugger.

A long, hard trawl of the internet turned up nothing. A few posts on various "Q&A" sites asking for info on these boards, but little else. A few lunchtimes of hunting later, I came across a posting on Makezine where Robert Coward said he had successfully reverse engineered the SDX signs.

I dropped Robert a line, and he very generously sent me the secret I needed to know to get "hello world" onto the sign.

Because the Ferrograph SDX signs were designed for call centre ACD systems using the Avaya Index telephone system, the sign expects to be sent the phrase "SDX" at the start of every packet. It was that one nugget of information that allowed me to get going with the SDX wallboard, but it was Robert's reverse engineering that really enabled me to push the sign to its limits, which took all of about 5 minutes!

There are many things wrong with the SDX sign; in fact possibly more defects than working features. I'm guessing that when Ferrograph tested the sign they only tested the features that would be needed for the sign to operate with the ACD system, and not the features that are typical of the Betabrite or Alpha moving message LED displays.

Anyhoo, long story short, Robert has created some new firmware (ADF) for the board that does the following amazing things:

* Conforms to the publicly available Alpha protocol. It works with ready made applications, and it's easy to write your own applications for it.
* It has lots of fancy effects (snow, dissolve, drop down, cursor wipe and many more), and smooth continual scrolling, as well as all standard wipes/scrolls.
* Supports small/large normal and fancy character fonts; one or two line operation for small fonts. Double wide, flashing and colour shift flashing options available.
* Automatic word fit and centering to ensure a presentable display at all times.
* Many colour combinations, including three colour rainbow and two colour stripes.
* Full support for pictures and animations; configurable animation speed.
* Sophisticated & flexible memory management.
* Full real time and date display support in various formats.
* Ability to set messages to appear and disappear at certain times/days.
* Control of two optically isolated misc IO signals available on some Aurora 63 units, allowing automatic control of external buzzers, lights and even mains appliances (via suitable interface hardware) in synchronisation with message displays.
* Serial readback for message data and system/error status.
* Works on RS232 or RS485 interfaces.
* Serial timeout message option to display an error or blank message if host serial comms fails.
* Automatically detects Z8S180 CPUs on modern units and switches to high speed internally for maximum performance. Also automatically handles boards hard wired to double speed operation.

I'm sure you will agree that's some feature list. Robert has turned a board that was destined for the skip into a fantastic, fully featured LED scrolling message board that would otherwise cost thousands of pounds.

A big feature here is that with the ADF firmware the sign talks the Alpha protocol - 1-byte, 2-byte and 3-byte. The Alpha protocol is publicly available, and therefore there is much info about it on the internet, including sample code.

For example, in Linux I can (where \x01, \x02 and \x04 are the SOH, STX and EOT control bytes):


echo "_01Z00_02A0hello world_04" > /dev/ttyS0


I contacted Robert again and purchased a copy of the firmware off him, which he supplied on an EEPROM ready to drop straight into the mainboard of the sign.
I powered up the sign, sent it "hello world" and basked in the 3-colour glow. Within minutes a small crowd had gathered around my desk, ooh-ing and ahh-ing at the effects. You see, if you sent the old SDX firmware "hello world", that's exactly what you got; with Robert's ADF firmware, the sign centres the text, positions it in the middle of the two rows, then applies various effects to it. The crowd were pleased.

OK, so now I needed to make this sign work hard; I needed it to parse the cctray.xml that is generated by Cruise. Surely this is something I'd find ready-made on t'interweb? Well, yes and no. Lots of people are parsing the XML that is output from CruiseControl, but we are using Cruise, and cctray.xml was what I wanted to parse (at first at least). But that, as they say, is another story.
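
As a taster though, cctray.xml is just a flat list of Project elements with a lastBuildStatus attribute, so even a quick and dirty Perl sketch like this will list the broken builds (the server URL is made up, and a regex is standing in for a proper XML parser):

#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;

# Fetch cctray.xml from the CI server (URL is hypothetical).
my $xml = get('http://ciserver:8153/cctray.xml')
    or die "couldn't fetch cctray.xml";

# Each project is a self-closing <Project .../> element.
while ($xml =~ /<Project\s+([^>]+)\/>/g) {
    my $attrs = $1;
    my ($name)   = $attrs =~ /name="([^"]+)"/;
    my ($status) = $attrs =~ /lastBuildStatus="([^"]+)"/;
    print "$name is $status\n" if $status && $status eq 'Failure';
}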

If you have a Ferrograph Aurora (Aurora 62, 63 or 64, new or old hardware type) SDX display / wallboard and you are interested in Robert's firmware, you can contact him directly via email: robertcoward{at}gmail{dot}com

Friday 27 February 2009

Choosing Twist - Part One


Choosing Twist - Saying goodbye to FitNesse

I won't go into the whys and hows in this post, but my team and I found ourselves grappling with FitNesse. It was a huge undertaking for us; something of a torrid affair, all love-hate and very emotive. But now we have embarked on a new journey of discovery, left FitNesse behind and moved to Twist. In this post I will hopefully capture the why and how of the transition.

Before I go any further I should perhaps lay out my stall. We are a large website operating in the UK. When I say large, I'm talking in terms of the footfall on the site and the revenue generated: around 500,000,000 page views a month, generated by 10% of the UK population. Traditionally all of our testing had been manual, and we were a PRINCE2-ish development house; aka the waterfall approach.

All of our testers are career testers and should not be confused with TDD developers, or BAs, or UAT-ers. Our testers are plain, old-fashioned, passionate (ISEB-accredited) functional and non-functional software testers. To us, testing is our career, not a stepping stone to project management.

Our scope was varied: we would test from just above unit testing all the way up to and including UAT (that is, most of the right-hand side of the V-model). Yes, we are testers, but we are technical testers, in that we understand HTTP, Ajax, CSS, load balancers, EdgeSuite, 3-DNS, PL/SQL, JBoss, Apache, J2EE etc. etc. Experience has taught us that on the whole we understand a lot more than your average "web tester". However, we don't do code. Ever. Never ever.

In the past we have used a number of what could be described as "de facto" test tools, most notably the Microsoft Office suite (Excel spreadsheets, Word templates and so on). Of course there has been much talk, and many pitches from prospective testing tool vendors over the years, and when my team was first learning how to get Fit with FitNesse, another department was seriously evaluating QTP and Axe; more on that later.

We were originally introduced to FitNesse through a project that we had decided to run using Agile principles (note the big A; we went all out). We had some guidance from our Agile partners ThoughtWorks, who were helping us find our way in the Agile world; they said we would need to seriously consider test automation and suggested a few tools, one of which was FitNesse.

I have commented previously that the first time we used FitNesse, we felt like the apes at the beginning of 2001: A Space Odyssey. There we were, staring up at this great monolith, with no idea what to do with it, but we knew it was important.

Through some heavy googling we settled on the Webtest fixture by Gojko Adzic. We soon became comfortable with FitNesse and Gojko's fixture, and then rapidly began to outgrow it. Before we knew it, the thing we called the Webtest fixture contained very little of Gojko's code and plenty of code specific to the idiosyncrasies of our own website; it had become our DSL. Yes, we were writing code. This was quite scary for us, because who tests the testers' tests? However, no sooner had we begun to feel comfortable writing the fixtures we needed to drive Selenium or execute a stored procedure than we started to feel the pain of FitNesse. Here, in no particular order, is a short (by no means exhaustive) list of our pain points:

Version control
Version control in (or with) FitNesse is one thing that still sends shudders down my spine. I know it can be done (we achieved it to some extent), but we had little faith in it, and consequently the team would keep local revisions on their machines.

Duplicate code
The code was continually being replicated/duplicated, and different testers would write different methods to do the same thing, e.g.:

  • The user navigates to <url>

  • The user Opens <url>

  • The URL <url> is opened

  • Open the Browser and Types <url>


Despite our best efforts to keep an internal dictionary (and an Ant task to generate HTML Javadocs), we would still find duplicate (or very similar) methods.

Wiki Markup
The wiki mark-up language. I know it's fairly straightforward, but when you have to escape words in your test so that FitNesse doesn't try to parse them (for example, wrapping a CamelCase product name in !- -! so it isn't rendered as a wiki link), it gets overly complicated quickly. I want to write a test quickly, not have to remember the syntax for bold italics.

Metrics
Lack of metrics. Seriously, is this a test tool? Not having metrics available after a test run was, and still is, the Achilles heel of FitNesse.

Setup and Teardown
Setup and teardown seemed great at first, until we discovered that we had to jump through hoops when we wanted to pass parameters around the tests based on an initial value discovered in a setup routine. Things are further compounded when you have a large suite that needs multiple setups and teardowns to keep the tests independent.

Lack of Documentation

While there is some documentation out there, a lot of it is confusing. Even the FitNesse website is confusing: it runs inside FitNesse itself and has pages that lead nowhere.

Tables
We found that, because of the nature of the beast, we needed to express all of our tests as tables. We learnt how to use FitLibrary's DoFixture to help us get around that limitation; however, once we had gained a better understanding of what we wanted to test, we would end up back in a table. The downside of this is that we may have wanted a test to read:

"a registered user returns to the site and successfully logs in."

The tabular nature of the test would coerce us into using each row in the table as a step to achieve the goal.

The whole table thing was such a PITA that I looked at using David Peterson's Concordion. It seemed like a good alternative because it doesn't have the same dependence on tables, but I didn't want to disappear, Alice-like, down the rabbit hole.

Throughout this painful period we just kept thinking it was our fault because we didn't know how to use Fit, and FitNesse was just exacerbating that. However, there came a point when we realised that we possessed more FitNesse knowledge than we gave ourselves credit for, and that realisation galvanised the thought that FitNesse was unsuitable for what we wanted. Ergo, there must be a better way.
In short, we had a real love-hate relationship with FitNesse. While it seemed to present us with more challenges than benefits, the benefits still outweighed the effort of making it all work, and to that end I became very proactive and began selling FitNesse to the business at large.

Remember that evaluation of QTP I was talking about? It was partly driven by a new development manager who was eager to influence the department with his experience of continuous integration; not having any automated tests was a non-starter for him.
We spent a significant amount of money on some POC work to see if QTP and Odin's AXE would work for us. It took 5 consultants 1 week to complete that proof of concept and demo their findings. However, that work was quickly put to shame by a tester who had never used FitNesse or Selenium but had seen their potential on our project. In one evening at home he was able to create an end-to-end test for his product, whereas the consultancy team who conducted the POC work (hint: this was their product) were unable to automate more than one fifth of the application in that week, and even then they could only drive Microsoft Internet Explorer. In one fell swoop FitNesse and Selenium saved the department £40,000, and another team adopted FitNesse.

Meanwhile, our life with FitNesse was coming to an end - or so we thought. Our toe-dipping exercise in the world of Agile was hailed by all as a success, and so it was decreed that from now on it was the Agile way or no way.

End of Part One

Wednesday 11 February 2009

Debian - Backup and restore installed software

- This is just a quick post so i can remember how i did something -

So I have taken an old Ferrograph SDX display and put new firmware on it ready to integrate it into an XFD (more on this in a later post).

I have decided to drive the display using Perl, as there is a requirement to fetch RSS and XML, parse it appropriately, build the packets for the sign and send them to the serial port of the host PC. I chose Perl because it has all of that stuff ready to go, and I can develop on my local PC (Windows) and port it to Linux fairly easily.
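
To give a flavour of the serial side, here is a minimal sketch using the CPAN module Device::SerialPort (Win32::SerialPort is its Windows twin, which is part of what makes porting easy). The device name and settings are my assumptions, carried over from the wallboard post:

#!/usr/bin/perl
use strict;
use warnings;
use Device::SerialPort;

# Assumes the sign is on /dev/ttyS0 at 9600-N-1.
my $port = Device::SerialPort->new('/dev/ttyS0')
    or die "can't open /dev/ttyS0";
$port->baudrate(9600);
$port->databits(8);
$port->parity('none');
$port->stopbits(1);
$port->write_settings;

# Packet framed as in the Alpha protocol sketch from the wallboard post.
my $packet = "\x00" x 5 . "\x01Z00\x02A0hello world\x04";
$port->write($packet);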

Anyhoo, I had already built an IRC server on an old junked desktop machine for the teams to use as a communication tool, and I built that on top of Debian Etch. I couldn't use the same machine for the XFD because it doesn't have a serial port, so I got hold of another junked desktop machine with a serial port and installed Debian Etch on there too, and soon realised I couldn't remember what software I had installed on the IRC box. I wanted to keep them in sync.

Backup list of installed software

To find out what software I had installed, I used the dpkg command to list the installed packages:


$ dpkg --get-selections


I then redirected the output into a file to store the list of installed software.

$ dpkg --get-selections > ~/installed-software.log

On the other box I used sftp to fetch the file, and used that to select the software I wanted to install.

All I had to do was type the following two commands:

# dpkg --set-selections < installed-software.log

Now my list is imported, I need to use dselect to install the packages.

# dselect

Select 'I' (install) at the menu to install the software, answering 'Y' to the question about additional space.

Job done!

Wednesday 4 February 2009

going round the Twist

I haven't posted for a while, and that's because we have been up to our ears in Twist.

Twist is a tool made by ThoughtWorks that takes a lot of the good ideas from FitNesse and bolts them on top of Eclipse.

I can't be too harsh on Twist because it is still in beta; however, we have had a lot of pain to deal with.

For one thing, ThoughtWorks appear to have thrown the baby out with the bathwater, as the supplied Selenium appears to be altogether different.

So this is where we are.

We have Twist installed on all the testers' machines, and we have installed Subclipse into Twist. This takes care of the version-control issues we had with FitNesse.

One issue we did run into was getting the Twist project to play nice within the main project Trunk (more on this later).

The guys are quickly writing the prose-like, business-speak-heavy tests (the Twist scenarios); however, the instrumentation of those tests has become a bit of a bottleneck, and some dev guys have come on board to help us with that.

What was interesting was seeing the dev guys pulling their hair out over Twist too.

I will post a fuller write-up soon that explains how we run the Twist tests (they are running as part of the CI build) and how we get reports from those tests.

Stuart.