Wednesday, 17 December 2008

Peer code reviews

Peer code reviews can have a devastating effect on software bugs, greatly reducing their number and scope. However, not all code reviews are equal, and some are more effective than others. There is no one-size-fits-all guide to code reviewing, but there are ways of arriving at the right review process for your situation.
At the bottom end of the effectiveness scale is a peer looking at a screen resplendent with the review code. This is better than no review, but still has drawbacks:
  • The review procedure will vary according to the reviewer and their state of mind; some reviewers will be more picky about certain issues, or even miss things others would find easily.
  • Problems found and commented on in the review may never be fixed - due to forgetfulness, other 'more important' pressing issues, or the author under review thinking them trivial. In any case, the problems are probably not followed up afterwards to make sure they are resolved.
  • A lack of reviewing guidelines can lead to some comments being taken a little too personally.
  • Most reviewers will only look through the code once, from top to bottom, picking out issues of any type as they go along (syntax issues, loop counting and memory allocation, for example). This is much less efficient than looking for one type of issue at a time and scanning the code once for each type - if you don't believe me, try it!
To catch as many bugs as possible during a code review, the generally agreed best practice contains the following guidelines:
  1. Use a checklist - some examples can be found in the links below.
  2. Review for one type of issue at a time.
  3. As time goes on, improve the checklist to catch the bugs which slip through the reviews.
  4. Use code printouts on which you can scribble comments - I've not come across any software tool that is as effective as paper.
  5. Log any issues not resolved during the review process, for example as a bug database entry - even if it seems trivial now, experience has taught me that it might be the most important thing in the world next month! It should also be possible to make the reviewer aware of any previous issues found with the code under review.
  6. Have your process geared towards reviewing little and often.
  7. Be balanced - note what the code does successfully as well as its issues, and offer praise where it is due.
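As an illustration of guidelines 1 and 2, a checklist-driven review can be sketched in a few lines - one full pass over the code per issue type. The issue names and patterns below are invented for the example, not taken from any published checklist:

```python
import re

# A hypothetical checklist: one pattern per issue type.
# Real checklists would be far richer and tuned to your codebase.
CHECKLIST = [
    ("magic number", re.compile(r"\b\d{2,}\b")),
    ("TODO left in code", re.compile(r"\bTODO\b")),
    ("empty except clause", re.compile(r"except\s*:\s*pass")),
]

def review(source: str):
    """Scan the source once per issue type, as guideline 2 recommends,
    returning (issue, line number, line text) findings to be logged."""
    findings = []
    for issue, pattern in CHECKLIST:        # one pass per checklist item
        for lineno, line in enumerate(source.splitlines(), start=1):
            if pattern.search(line):
                findings.append((issue, lineno, line.strip()))
    return findings
```

The findings list maps directly onto guideline 5: anything not fixed on the spot goes into the bug database.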
More thorough details and further reading can be found here:

Thursday, 20 November 2008

Easy SCM

Software Configuration Management (SCM) is an important part of software development - without robust SCM, no software development project can achieve a high quality result.
To this end, there are a few simple characteristics that indicate good SCM - personally, I haven't encountered a successful project whose SCM plan lacks them:
  1. Be a snail - leave a trail! Changes are never versioned without a documented reason.
  2. Documented reasons are always either defect fixes, or software enhancements. Both defects and enhancements can be identified by an ID without the need to log detailed explanations for each change - all that is required is for the change to cross reference the defect or enhancement (from a bug database or project plan, for example).
  3. Each change is made for a logical reason, and one reason only. Changes aren't combined or split over several different versions.
  4. It is easy to 'do some archaeology' by quickly running older configurations. e.g. There exists a store of automated nightly builds with corresponding configuration IDs available.
  5. The way of working is cultural, not 'enforced' - although it should be difficult to 'absent-mindedly' not follow the process. i.e. You can't make a change without a defect or enhancement ID to log against it.
  6. The SCM system is transparent and visible, so others with an interest can view progress (although not necessarily take part in the activities - that can be damaging to progress!).
  7. Each change represents no more than two man-weeks of work.
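Characteristic 5 - making it difficult to 'absent-mindedly' bypass the process - lends itself to automation. Here's a minimal sketch of a commit-message hook; the 'DEF-nnn' and 'ENH-nnn' ID formats are assumptions for the example, to be adapted to whatever your bug database and project plan actually use:

```python
import re
import sys

# Hypothetical ID formats: 'DEF-123' for defect fixes,
# 'ENH-45' for enhancements (see characteristic 2).
ID_PATTERN = re.compile(r"\b(DEF|ENH)-\d+\b")

def check_message(message: str) -> bool:
    """Return True only if the message cross-references a defect or
    enhancement ID - the documented reason required by characteristic 1."""
    return ID_PATTERN.search(message) is not None

if __name__ == "__main__" and len(sys.argv) > 1:
    # Installed as a commit-msg hook, the VCS passes the message file path.
    with open(sys.argv[1]) as f:
        if not check_message(f.read()):
            sys.exit("rejected: no DEF-nnn or ENH-nnn ID in commit message")
```

A hook like this doesn't replace the culture (characteristic 5 is explicit that the way of working comes first), it just makes the lazy path the compliant one.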
So, how does your project rate against these?

Friday, 31 October 2008

The IEEE software standards

The IEEE software standards are a very useful set of documents, even if you're not in an organisation that favours particularly formal software development methods. The standards do not have to be implemented wholesale to make use of the information they contain.
For example, they can be a good starting point if you're ever asked to plan or produce documents such as a software requirements specification (SRS), a software configuration management plan (SCM plan), or a software architecture document (although for the last of these I've found HP's "A Template for Documenting Software and Firmware Architectures" much more usable).

The only downside is that they are not free, but many organizations have IEEE membership and have the standards available for employees.

Here's the list of IEEE standards and guides I've found to be of use, in no particular order, vaguely grouped for readability:

Project Management
  • 1490 - Adoption of PMI Standard - A Guide to the Project Management Body of Knowledge (PMBOK) - A guide to project management knowledge and practices in general widespread use.
  • 1045 - Standard for Software Productivity Metrics - A bit dated, but the practices and concepts it gives for measuring productivity are well described.
  • 1058 - Standard for Software Project Management Plans - Worth a look if you're unsure as to what you'll need to put in a project management plan, but a little too specific to the IEEE way of doing things.
  • ISO/IEC 12207.0 - Software Life Cycle Processes - Attempts to classify all the processes that contribute to software, and puts them into a framework.
  • ISO/IEC 12207.1 - Software Life Cycle Processes - Life Cycle Data - A companion to 12207.0, describing the data and documentation produced by the life cycle processes it defines.
  • 982.1 - Standard Dictionary of Measures to Produce Reliable Software - A bit ancient (1988), but tries to give a description of measures that can be made on software and software projects that indicate the quality of the software, with all the mathematical rigour that involves.
  • 1220 - Standard for Application and Management of the Systems Engineering Process - Similar to 1074, again use CMMI instead.

Software Engineering
  • Guide to the Software Engineering Body of Knowledge (SWEBOK) - A guide to software engineering knowledge and practices in general widespread use.
  • 610 - Glossary of Software Engineering Terminology - A little dated, and leans towards IEEE understanding (as opposed to widespread understanding) of some terms, but can still be useful for reference, and is referenced by most other IEEE standards.
  • 828 - Standard for Software Configuration Management Plans - If you need to produce an SCM plan and have nowhere to start, this will show you the way. Also discusses useful activities involved in managing and adhering to a SCM plan.
  • 1002 - Taxonomy for Software Engineering Standards - Also quite ancient. A taxonomy is a method for classification, and this describes how a set of standards can be chosen to cover all necessary areas of software engineering.
  • 1028 - Standard for Software Reviews - Gives criteria and practices for reviewing software - be it for development, acquisition or operation.
  • 1061 - Standard for a Software Quality Metrics Methodology - Aimed at those measuring or assessing the quality of software, in a formal manner.
  • 1074 - Standard for Developing Software Life Cycle Processes - Attempts to define a way of creating a good sw process. Not half as useful as CMMI.
  • 1471 - Recommended Practice for Architectural Description of Software-intensive Systems - A version of Kruchten's 4+1 approach to software architecture.
  • 1042 - Guide to Software Configuration Management - Practices for performing SCM, and managing SC items within a project.

  • 730.1 - Guide for Software Quality Assurance Planning - Great if you need to write and manage a Software Quality Assurance Plan, and have no idea of where to start - this lists and discusses the contents of such a document and good practices involved in managing it.
  • 730 - Standard for Software Quality Assurance Plans - A lot more detailed than 730.1, and gives the format and content requirements a SQA plan should meet to conform to the IEEE standard.
  • 830 - Recommended Practice for Software Requirements Specifications - Describes what should be contained in a good (albeit formal) SRS, and gives several example outlines of SRS documents.
  • 1062 - Recommended Practice for Software Acquisition - Obtaining and using the right software, that's right for your needs is not an easy task. This gives some useful practices on performing this task.
  • 1063 - Standard for Software User Documentation - Good practices for putting the relevant information into your user documentation.
  • 1219 - Standard for Software Maintenance - This standard describes an iterative process for managing and executing software maintenance activities.
  • 1228 - Standard for Software Safety Plans - Establishes criteria for the content of a software safety plan.
  • 1233 - Guide for Developing System Requirements Specifications - A guide to obtaining and managing requirements in an SRS.

  • 829 - Standard for Software Test Documentation - Gives a description of what should be in software test documentation (cases, logs, plans etc), and why. Gives the form and content of test documents, but does not say which documents are needed in particular situations.
  • 1008 - Standard for Software Unit Testing - A standard for planning, building and executing unit tests.
  • 1012 - Standard for Software Verification and Validation Plans - Gives a standard for V&V plans, describing what inputs, outputs and criteria are recommended for a project's V&V activities and should be recorded in a plan.
  • 1044 - Guide to Classification of Software Anomalies - How to write and manage bug reports. Very useful, as in my experience even some very experienced software engineers have trouble taking the time to write useful bug reports.
  • 1059 - Guide for Software Verification and Validation Plans - Gives a process for using and managing V&V plans.

  • 1016.1 - Guide to Software Design Descriptions - Concentrates on documenting and using 'views' into a design, much like Kruchten's 4+1 paper.
  • 1016 - Recommended Practice for Software Design Descriptions - How to go about writing a SDD, within the project life cycle.
  • 1209 - Recommended Practice for the Evaluation and Selection of CASE Tools - CASE tools are notoriously difficult to choose and use successfully (see an earlier post), this tries to guide you around this particular minefield.
  • 1320.1 - Standard for Functional Modelling Language - syntax and semantics for IDEF0 - A formal system/process modelling technique. Quite heavy, I've found ETVX to be much more usable.
  • 1320.2 - Standard for Conceptual Modelling Language Syntax and Semantics for IDEF1X97 (IDEFobject) - Again, a formal system/process modelling technique. Quite heavy, I've found ETVX to be much more usable.
  • 1348 - Recommended Practice for the Adoption of CASE Tools - Once you've got a CASE tool, the fun doesn't stop there. Carries on from 1209.

Saturday, 13 September 2008

Can you sell a software process?

Most software groups (and I'm sure this doesn't apply just to software groups) with any sort of history have a record of trying out prescribed process improvement after process improvement, many ending in some sort of failure - generally meaning that the process didn't meet the expectations or goals it was introduced with.

This can be especially true of process improvements built around an expensive software tool. I've had my fair share of colleagues lost to some sales type or evangelist touting the latest and greatest in software silver bullets. The use of the word 'evangelist' is telling, conjuring up images of an overzealous individual pushing unfounded ideas at you - it gives an indication of the mystic aura a 'process guru' can give themselves, and by which one can be blinded.

CASE tools aren't that useful when they constrain you too tightly to one process, and some companies that push tools as well as processes rely on a self-created, cult-like following whipped up from the guru of the day's scribblings. When these words of wisdom are unfounded, unproven and (even better) not understandable, savvy marketers can use them to sell the tool.

A prescribed process (from a heavy one to a very light or 'agile' one) may fit perfectly into the company, work environment, or culture it grew up in but moving it into another environment, more often than not, can transform it into a useless or even dangerous beast.

In the translation, you can lose the meaning of important concepts, miss vital assumptions, and hit all the other pitfalls associated with one person attempting to describe a complex system to another. No matter how many reams of documents or concise manifestos one writes, some things just get missed - even when the two communicators' work cultures are very similar.

What would be much more useful and less prone to failure is an expert in processes to mentor a group over a period of time (even indefinitely), who can prescribe and tailor a process to meet their current needs, to put in place mechanisms to measure the success and progress of the group and ultimately advance the group's software capability - a process specialist or change agent.

A software CASE tool is no substitute for an experienced mentor!

Saturday, 23 August 2008

Ethical policies of professional IT bodies

I consider myself a professional software engineer. To be recognised as such in the wider community involves, amongst other things, subscribing to a professional ethical policy. Personally, I feel an ethical policy must involve not working on anything that will result in harm or even death to others (i.e. no 'defence' work). So which professional IT bodies would be suitable for me to join, with this in mind?

The BCS - avoids the issue by not defining an ethical policy - only has a code of conduct which

'... governs your personal conduct as an individual member of the BCS and not the nature of business or ethics of the relevant authority'.

This is a bit of a cop-out, and avoids the difficult questions altogether.

The IEEE - does have a code of ethics - which does, on first glance seem to fit the bill:

1. to accept responsibility in making decisions consistent with the safety, health and welfare of the public, and to disclose promptly factors that might endanger the public or the environment;
9. to avoid injuring others, their property, reputation, or employment by false or malicious action;

But I'm also aware that the IEEE has many high profile members who work in the defence industry and also produces specifications created and used in the defence industry.

The ACM - Much better, uses the term 'human rights', and takes the most care of these three bodies to make clear their ethical stance and to give direction:

This principle concerning the quality of life of all people affirms an obligation to protect fundamental human rights and to respect the diversity of all cultures. An essential aim of computing professionals is to minimize negative consequences of computing systems, including threats to health and safety. When designing or implementing systems, computing professionals must attempt to ensure that the products of their efforts will be used in socially responsible ways, will meet social needs, and will avoid harmful effects to health and welfare.

In addition to a safe social environment, human well-being includes a safe natural environment. Therefore, computing professionals who design and develop systems must be alert to, and make others aware of, any potential damage to the local or global environment.
There's also an interesting discussion of ethics in this paper here, and a very thorough discussion of ICT bodies and ethics, with recommendations here.

I'm still undecided - I believe I need to spend a lot more time researching this topic, as the standard policy of most IT organisations appears to allow members to work in defence - even if that's only discernible by reading between the lines.

Tuesday, 29 July 2008

A Linux Literary Trilogy

When I started delving into the world of Linux development, I was not only befuddled by the strange code layout and conventions, I also found the culture and ethos of Linux very confusing. There were three books that were invaluable in pulling my understanding out of this quagmire, which I'll mention briefly:

The Linux Programmer's Toolbox by John Fusco. Takes your Linux usefulness from 0-60 in six seconds. Not totally exhaustive on every Linux tool, but brilliant for giving you an up-to-date map of the Linux development environment. Not only that, it can give you a greater understanding of any development environment which uses make or GCC. I really can't recommend this book highly enough - it's so well written and laid out that I use it regularly as a reference manual. As well as covering many of the useful Linux tools (and showing you how to look for the rest), it covers how the kernel works, GNU make build systems and debugging, and has a nice comprehensive guide to using Vim and Emacs effectively (although, sadly, it doesn't say which is best - but I think you know the answer to that).

The Art of Unix Programming by Eric S. Raymond (Link is to the full book text). Once you have all the tools, you'll want to know how to use them. Not only does this book give you the why behind the what of Linux - explaining the design and implementation mechanisms that have shaped it - it gives an excellent narrative on the history and context in which these mechanisms evolved, from someone who was there at the time. Make sure you absorb his 17 basics of the Unix philosophy; don't trust yourself to touch a line of code until you do! I'll repeat them here, just in case you miss them:
  • Modularity: Write simple parts connected by clean interfaces.
  • Clarity: Clarity is better than cleverness.
  • Composition: Design programs to be connected to other programs.
  • Separation: Separate policy from mechanism; separate interfaces from engines.
  • Simplicity: Design for simplicity; add complexity only where you must.
  • Parsimony: Write a big program only when it is clear by demonstration that nothing else will do.
  • Transparency: Design for visibility to make inspection and debugging easier.
  • Robustness: Robustness is the child of transparency and simplicity.
  • Representation: Fold knowledge into data, so program logic can be stupid and robust.
  • Least Surprise: In interface design, always do the least surprising thing.
  • Silence: When a program has nothing surprising to say, it should say nothing.
  • Repair: When you must fail, fail noisily and as soon as possible.
  • Economy: Programmer time is expensive; conserve it in preference to machine time.
  • Generation: Avoid hand-hacking; write programs to write programs when you can.
  • Optimization: Prototype before polishing. Get it working before you optimize it.
  • Diversity: Distrust all claims for "one true way".
  • Extensibility: Design for the future, because it will be here sooner than you think.

The Art of Happiness: A Handbook for Living, by the Dalai Lama. Not a technology book, but it has to be said that some of the greatest challenges I've faced in my working environment have come not from the code or the technology, but from the people.
This book gives a good grounding in principles that lead to a greater knowledge of yourself and others - based around the idea that compassion for others is the main source of happiness.
Useful for those times when you need to take a deep breath and stand back....

Thursday, 12 June 2008

Using the correct case

I think use cases are great, but unfortunately the term 'Use Case' has one meaning if you use them as a working tool, and more often than not, another meaning if used as a buzz-word in conversation.

It's a commonly used term by developers, but personally, I often cringe when someone in a management or marketing role uses it to identify a product feature or even just a single scenario - a clear case of use case misuse!

Use cases are a bit more technical than just an idea of how something may be used - the term was first coined by Ivar Jacobson in the 1980s, and has since been expanded on greatly by the likes of Alistair Cockburn and his peers. They are a brilliant tool for gathering requirements, and consist of nothing more than simple text descriptions.

However, even though a finished use case should be a shining beacon of simplicity and clarity, getting to that point can be devilishly complex and requires more than a little experience in writing use cases - the heuristic 'practice makes perfect' certainly holds true here. The trouble is, once you begin utilising use cases it quickly becomes apparent how useful they are in many more areas of software development than just requirements gathering.

A short list of areas where use cases can add value:
  • Requirements analysis - Is the use case describing what you want the system to do?
  • Requirement traceability - Justifying the inclusion of a particular piece of design or code.
  • Software design - Use cases can lead straight into the design phase, e.g. by using sequence diagrams for each important use case thread.
  • Planning and tracking - A set of use cases breaks the software system down into more manageable chunks, which can be planned and progress measured against.
  • Test design and writing - A use case is also a ready-made test case.
  • Release management - When deciding which features to include or wait for inclusion, use cases provide a mechanism for linking features to user goals.
  • Change management - Especially for iterative development, use cases can, for example, aid by scoping change requests to work estimates.
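To illustrate the 'ready-made test case' point above, here's a minimal sketch of a use case as a data structure, with a function that turns its main scenario into a test outline. The structure is hypothetical and far simpler than a real template (Cockburn's, for instance, adds preconditions, extensions, guarantees and more):

```python
from dataclasses import dataclass, field

# A deliberately minimal, hypothetical use case structure:
# just an actor, a goal, and the numbered steps of the main scenario.
@dataclass
class UseCase:
    name: str
    actor: str
    goal: str
    main_scenario: list = field(default_factory=list)

def as_test_skeleton(uc: UseCase) -> str:
    """Turn the main scenario into a test-case outline - each step of
    the scenario becomes a verification step in the test."""
    lines = [f"Test: {uc.name} ({uc.actor} -> {uc.goal})"]
    lines += [f"  step {i}: verify {step}"
              for i, step in enumerate(uc.main_scenario, 1)]
    return "\n".join(lines)
```

The same structure supports the planning point too: counting scenarios and steps gives you chunks to estimate and track progress against.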

I won't go into more detail here, as plenty of insightful information is available elsewhere on these topics (See Alistair Cockburn's work, for example). Suffice to say that you can drive almost the whole software process using a use-case centric approach - not that you should, of course, and the circumstances to which you are fitting a process must always be considered carefully.

Once you realise the power of well-written use cases and understand the areas in which you will use them, it seems a good idea to spend more time considering their details and form - unfortunately there is no one-size-fits-all approach to this, as the structure of a use case is very dependent on the domain and environment in which it is being used.

One example of this is for use cases geared towards embedded systems, where I have found the assumption of everything happening in 'zero time' to be very useful - you end a use case scenario whenever you have to wait for something. This approach would be unwieldy when writing a set of high-level use cases for a user interface, on the other hand.

Spending many hours getting a relatively small use case, or set of use cases, into a usable form may appear a pedantic waste of time, but to an experienced use case author it is time well spent. So many lessons are learnt, and dead ends reached, when writing and using use cases 'in anger' that time spent early on considering those lessons proves very beneficial in the long run.